Test Report: QEMU_macOS 19373

afa0c1cf199b27e59d48f8572184259dc9d34cb2:2024-08-05:35664

Failed tests (94/278)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 17.41
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.27
55 TestCertOptions 10.12
56 TestCertExpiration 195.13
57 TestDockerFlags 10.22
58 TestForceSystemdFlag 10.2
59 TestForceSystemdEnv 10.44
104 TestFunctional/parallel/ServiceCmdConnect 34.34
176 TestMultiControlPlane/serial/StopSecondaryNode 312.31
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 225.13
178 TestMultiControlPlane/serial/RestartSecondaryNode 305.25
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 332.57
181 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.07
183 TestMultiControlPlane/serial/StopCluster 207.56
186 TestImageBuild/serial/Setup 9.93
189 TestJSONOutput/start/Command 10.16
195 TestJSONOutput/pause/Command 0.08
201 TestJSONOutput/unpause/Command 0.04
218 TestMinikubeProfile 10.13
221 TestMountStart/serial/StartWithMountFirst 10.22
224 TestMultiNode/serial/FreshStart2Nodes 9.97
225 TestMultiNode/serial/DeployApp2Nodes 94.7
226 TestMultiNode/serial/PingHostFrom2Pods 0.09
227 TestMultiNode/serial/AddNode 0.07
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.08
230 TestMultiNode/serial/CopyFile 0.06
231 TestMultiNode/serial/StopNode 0.13
232 TestMultiNode/serial/StartAfterStop 48.22
233 TestMultiNode/serial/RestartKeepsNodes 7.48
234 TestMultiNode/serial/DeleteNode 0.1
235 TestMultiNode/serial/StopMultiNode 2.18
236 TestMultiNode/serial/RestartMultiNode 5.25
237 TestMultiNode/serial/ValidateNameConflict 20.03
241 TestPreload 10.09
243 TestScheduledStopUnix 9.88
244 TestSkaffold 12.37
247 TestRunningBinaryUpgrade 599.47
249 TestKubernetesUpgrade 18.72
262 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.74
263 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.29
265 TestStoppedBinaryUpgrade/Upgrade 577.68
267 TestPause/serial/Start 9.87
277 TestNoKubernetes/serial/StartWithK8s 10.09
278 TestNoKubernetes/serial/StartWithStopK8s 5.31
279 TestNoKubernetes/serial/Start 5.3
283 TestNoKubernetes/serial/StartNoArgs 5.32
285 TestNetworkPlugins/group/auto/Start 9.81
286 TestNetworkPlugins/group/kindnet/Start 9.99
287 TestNetworkPlugins/group/calico/Start 9.94
288 TestNetworkPlugins/group/custom-flannel/Start 9.73
289 TestNetworkPlugins/group/false/Start 9.86
290 TestNetworkPlugins/group/enable-default-cni/Start 9.81
291 TestNetworkPlugins/group/flannel/Start 9.86
292 TestNetworkPlugins/group/bridge/Start 9.9
293 TestNetworkPlugins/group/kubenet/Start 9.84
296 TestStartStop/group/old-k8s-version/serial/FirstStart 9.86
297 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
301 TestStartStop/group/old-k8s-version/serial/SecondStart 5.24
302 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
303 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
304 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
305 TestStartStop/group/old-k8s-version/serial/Pause 0.09
307 TestStartStop/group/no-preload/serial/FirstStart 9.96
308 TestStartStop/group/no-preload/serial/DeployApp 0.09
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
312 TestStartStop/group/embed-certs/serial/FirstStart 9.98
314 TestStartStop/group/no-preload/serial/SecondStart 5.97
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
317 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
318 TestStartStop/group/no-preload/serial/Pause 0.1
320 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.53
321 TestStartStop/group/embed-certs/serial/DeployApp 0.1
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.14
325 TestStartStop/group/embed-certs/serial/SecondStart 6.1
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
327 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
328 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
330 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
331 TestStartStop/group/embed-certs/serial/Pause 0.11
334 TestStartStop/group/newest-cni/serial/FirstStart 9.94
336 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.77
339 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
340 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
342 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.06
343 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
345 TestStartStop/group/newest-cni/serial/SecondStart 5.26
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
349 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (17.41s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-532000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-532000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (17.412459292s)

-- stdout --
	{"specversion":"1.0","id":"c0475ec1-6bf2-4748-b8e9-3c91492edfb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-532000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cb5f2cc3-3edc-4705-8f6b-38e2c2263c23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19373"}}
	{"specversion":"1.0","id":"195e38af-be45-4d79-97a1-08305c41edbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig"}}
	{"specversion":"1.0","id":"99d7fdfc-4430-4bb8-8cb8-9acb3d2e41d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"5b300a88-db35-4cf3-aa48-d5688b6e7ae5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4385e59e-8d5c-4b8b-a4fc-a06064b8ae83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube"}}
	{"specversion":"1.0","id":"b29ddb3e-7890-4ef1-9eaa-1b3adab86808","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"6d0b7e39-b450-469f-96f8-aa1c0abc8f8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"94b42fc0-2431-417d-8e98-53931630e20d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"101015ee-c0a7-4e6b-bf3d-714cbda1817d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cd431e7b-ccd6-4b62-b54a-2de24ebb8197","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-532000\" primary control-plane node in \"download-only-532000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c17e8096-249c-40c1-adf9-e69bc440c1b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"42322827-e294-4598-bd44-0cd7560604fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107035d20 0x107035d20 0x107035d20 0x107035d20 0x107035d20 0x107035d20 0x107035d20] Decompressors:map[bz2:0x1400000f5e0 gz:0x1400000f5e8 tar:0x1400000f590 tar.bz2:0x1400000f5a0 tar.gz:0x1400000f5b0 tar.xz:0x1400000f5c0 tar.zst:0x1400000f5d0 tbz2:0x1400000f5a0 tgz:0x1400000f5b0 txz:0x1400000f5c0 tzst:0x1400000f5d0 xz:0x1400000f5f0 zip:0x1400000f600 zst:0x1400000f5f8] Getters:map[file:0x1400089c850 http:0x14000816320 https:0x14000816370] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"e77577ad-0c08-401f-80a1-3c04ba869ef9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0805 15:46:49.744180    1553 out.go:291] Setting OutFile to fd 1 ...
	I0805 15:46:49.744355    1553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 15:46:49.744358    1553 out.go:304] Setting ErrFile to fd 2...
	I0805 15:46:49.744360    1553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 15:46:49.744493    1553 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	W0805 15:46:49.744582    1553 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19373-1054/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19373-1054/.minikube/config/config.json: no such file or directory
	I0805 15:46:49.745966    1553 out.go:298] Setting JSON to true
	I0805 15:46:49.765747    1553 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":980,"bootTime":1722897029,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 15:46:49.765846    1553 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 15:46:49.772569    1553 out.go:97] [download-only-532000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 15:46:49.772726    1553 notify.go:220] Checking for updates...
	W0805 15:46:49.772734    1553 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball: no such file or directory
	I0805 15:46:49.776493    1553 out.go:169] MINIKUBE_LOCATION=19373
	I0805 15:46:49.779384    1553 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 15:46:49.785611    1553 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 15:46:49.788517    1553 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 15:46:49.792216    1553 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	W0805 15:46:49.799527    1553 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 15:46:49.799771    1553 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 15:46:49.804174    1553 out.go:97] Using the qemu2 driver based on user configuration
	I0805 15:46:49.804192    1553 start.go:297] selected driver: qemu2
	I0805 15:46:49.804206    1553 start.go:901] validating driver "qemu2" against <nil>
	I0805 15:46:49.804257    1553 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 15:46:49.809104    1553 out.go:169] Automatically selected the socket_vmnet network
	I0805 15:46:49.815727    1553 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0805 15:46:49.815819    1553 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 15:46:49.815882    1553 cni.go:84] Creating CNI manager for ""
	I0805 15:46:49.815899    1553 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0805 15:46:49.815951    1553 start.go:340] cluster config:
	{Name:download-only-532000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-532000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 15:46:49.821429    1553 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 15:46:49.826135    1553 out.go:97] Downloading VM boot image ...
	I0805 15:46:49.826152    1553 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0805 15:46:58.636573    1553 out.go:97] Starting "download-only-532000" primary control-plane node in "download-only-532000" cluster
	I0805 15:46:58.636602    1553 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 15:46:58.696082    1553 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0805 15:46:58.696103    1553 cache.go:56] Caching tarball of preloaded images
	I0805 15:46:58.696317    1553 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 15:46:58.701442    1553 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0805 15:46:58.701450    1553 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 15:46:58.797960    1553 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0805 15:47:05.804579    1553 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 15:47:05.804765    1553 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 15:47:06.501130    1553 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0805 15:47:06.501333    1553 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/download-only-532000/config.json ...
	I0805 15:47:06.501352    1553 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/download-only-532000/config.json: {Name:mk3cabbd89337a06e6e35d69d98fb82611c24728 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 15:47:06.501612    1553 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 15:47:06.501820    1553 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0805 15:47:07.088151    1553 out.go:169] 
	W0805 15:47:07.092282    1553 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107035d20 0x107035d20 0x107035d20 0x107035d20 0x107035d20 0x107035d20 0x107035d20] Decompressors:map[bz2:0x1400000f5e0 gz:0x1400000f5e8 tar:0x1400000f590 tar.bz2:0x1400000f5a0 tar.gz:0x1400000f5b0 tar.xz:0x1400000f5c0 tar.zst:0x1400000f5d0 tbz2:0x1400000f5a0 tgz:0x1400000f5b0 txz:0x1400000f5c0 tzst:0x1400000f5d0 xz:0x1400000f5f0 zip:0x1400000f600 zst:0x1400000f5f8] Getters:map[file:0x1400089c850 http:0x14000816320 https:0x14000816370] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0805 15:47:07.092308    1553 out_reason.go:110] 
	W0805 15:47:07.100255    1553 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 15:47:07.104051    1553 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-532000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (17.41s)
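
Note: the root cause is the INET_CACHE_KUBECTL error above. The kubectl checksum URL for v1.20.0 on darwin/arm64 returns HTTP 404, so the download aborts with exit status 40; Kubernetes v1.20.0 was released before Go could target darwin/arm64, so no darwin/arm64 kubectl binary (or .sha256 file) was ever published for it. A minimal Go sketch to confirm the 404 independently of minikube, using the URL copied from the error message (it reproduces only the HTTP lookup, not minikube's download stack):

	// checkurl.go: confirm the missing checksum file behind the failure above.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// The checksum file whose absence aborts the kubectl cache step.
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request error:", err)
			return
		}
		defer resp.Body.Close()
		// Expect "404 Not Found": v1.20.0 predates darwin/arm64 release binaries.
		fmt.Println(url, "->", resp.Status)
	}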

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
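
Note: this subtest fails purely as a cascade of the json-events failure above; per aaa_download_only_test.go:175 it only checks that the cached kubectl binary exists, and the failed download never created it. A minimal sketch of that existence check, with the path copied from the log:

	// statcheck.go: the existence check this subtest performs, in isolation.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		path := "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
		if _, err := os.Stat(path); err != nil {
			// After the failed download this prints "no such file or
			// directory", matching the test output.
			fmt.Println("missing:", err)
			return
		}
		fmt.Println("present:", path)
	}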

TestOffline (10.27s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-951000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-951000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (10.120078083s)

-- stdout --
	* [offline-docker-951000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-951000" primary control-plane node in "offline-docker-951000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-951000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:33:08.474264    4095 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:33:08.474402    4095 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:33:08.474405    4095 out.go:304] Setting ErrFile to fd 2...
	I0805 16:33:08.474407    4095 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:33:08.474544    4095 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:33:08.475737    4095 out.go:298] Setting JSON to false
	I0805 16:33:08.493533    4095 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3759,"bootTime":1722897029,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:33:08.493612    4095 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:33:08.499090    4095 out.go:177] * [offline-docker-951000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:33:08.506982    4095 notify.go:220] Checking for updates...
	I0805 16:33:08.510955    4095 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:33:08.513904    4095 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:33:08.516915    4095 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:33:08.519889    4095 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:33:08.522931    4095 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:33:08.525952    4095 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:33:08.529283    4095 config.go:182] Loaded profile config "multinode-860000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:33:08.529333    4095 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:33:08.532875    4095 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 16:33:08.539951    4095 start.go:297] selected driver: qemu2
	I0805 16:33:08.539962    4095 start.go:901] validating driver "qemu2" against <nil>
	I0805 16:33:08.539971    4095 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:33:08.541980    4095 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:33:08.544864    4095 out.go:177] * Automatically selected the socket_vmnet network
	I0805 16:33:08.548012    4095 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:33:08.548030    4095 cni.go:84] Creating CNI manager for ""
	I0805 16:33:08.548037    4095 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:33:08.548041    4095 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 16:33:08.548074    4095 start.go:340] cluster config:
	{Name:offline-docker-951000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-951000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:33:08.551815    4095 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:33:08.558931    4095 out.go:177] * Starting "offline-docker-951000" primary control-plane node in "offline-docker-951000" cluster
	I0805 16:33:08.562894    4095 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:33:08.562920    4095 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 16:33:08.562931    4095 cache.go:56] Caching tarball of preloaded images
	I0805 16:33:08.563002    4095 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:33:08.563007    4095 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:33:08.563070    4095 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/offline-docker-951000/config.json ...
	I0805 16:33:08.563080    4095 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/offline-docker-951000/config.json: {Name:mk4921d47ee5b56b0bcb6d61a917a29736039c4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:33:08.563383    4095 start.go:360] acquireMachinesLock for offline-docker-951000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:33:08.563415    4095 start.go:364] duration metric: took 26.042µs to acquireMachinesLock for "offline-docker-951000"
	I0805 16:33:08.563425    4095 start.go:93] Provisioning new machine with config: &{Name:offline-docker-951000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-951000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:33:08.563450    4095 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:33:08.567861    4095 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 16:33:08.584042    4095 start.go:159] libmachine.API.Create for "offline-docker-951000" (driver="qemu2")
	I0805 16:33:08.584088    4095 client.go:168] LocalClient.Create starting
	I0805 16:33:08.584161    4095 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:33:08.584203    4095 main.go:141] libmachine: Decoding PEM data...
	I0805 16:33:08.584216    4095 main.go:141] libmachine: Parsing certificate...
	I0805 16:33:08.584262    4095 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:33:08.584286    4095 main.go:141] libmachine: Decoding PEM data...
	I0805 16:33:08.584296    4095 main.go:141] libmachine: Parsing certificate...
	I0805 16:33:08.584676    4095 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:33:08.731736    4095 main.go:141] libmachine: Creating SSH key...
	I0805 16:33:09.141270    4095 main.go:141] libmachine: Creating Disk image...
	I0805 16:33:09.141284    4095 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:33:09.141477    4095 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/offline-docker-951000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/offline-docker-951000/disk.qcow2
	I0805 16:33:09.151403    4095 main.go:141] libmachine: STDOUT: 
	I0805 16:33:09.151428    4095 main.go:141] libmachine: STDERR: 
	I0805 16:33:09.151497    4095 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/offline-docker-951000/disk.qcow2 +20000M
	I0805 16:33:09.160320    4095 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:33:09.160341    4095 main.go:141] libmachine: STDERR: 
	I0805 16:33:09.160360    4095 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/offline-docker-951000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/offline-docker-951000/disk.qcow2
	I0805 16:33:09.160368    4095 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:33:09.160375    4095 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:33:09.160413    4095 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/offline-docker-951000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/offline-docker-951000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/offline-docker-951000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:09:bb:44:79:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/offline-docker-951000/disk.qcow2
	I0805 16:33:09.162146    4095 main.go:141] libmachine: STDOUT: 
	I0805 16:33:09.162159    4095 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:33:09.162178    4095 client.go:171] duration metric: took 578.096875ms to LocalClient.Create
	I0805 16:33:11.164205    4095 start.go:128] duration metric: took 2.6008s to createHost
	I0805 16:33:11.164221    4095 start.go:83] releasing machines lock for "offline-docker-951000", held for 2.600854166s
	W0805 16:33:11.164238    4095 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:33:11.185177    4095 out.go:177] * Deleting "offline-docker-951000" in qemu2 ...
	W0805 16:33:11.205005    4095 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:33:11.205017    4095 start.go:729] Will try again in 5 seconds ...
	I0805 16:33:16.207078    4095 start.go:360] acquireMachinesLock for offline-docker-951000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:33:16.207488    4095 start.go:364] duration metric: took 327.292µs to acquireMachinesLock for "offline-docker-951000"
	I0805 16:33:16.207594    4095 start.go:93] Provisioning new machine with config: &{Name:offline-docker-951000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-951000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:33:16.207862    4095 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:33:16.217440    4095 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 16:33:16.265936    4095 start.go:159] libmachine.API.Create for "offline-docker-951000" (driver="qemu2")
	I0805 16:33:16.265986    4095 client.go:168] LocalClient.Create starting
	I0805 16:33:16.266135    4095 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:33:16.266205    4095 main.go:141] libmachine: Decoding PEM data...
	I0805 16:33:16.266226    4095 main.go:141] libmachine: Parsing certificate...
	I0805 16:33:16.266294    4095 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:33:16.266338    4095 main.go:141] libmachine: Decoding PEM data...
	I0805 16:33:16.266351    4095 main.go:141] libmachine: Parsing certificate...
	I0805 16:33:16.266856    4095 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:33:16.423083    4095 main.go:141] libmachine: Creating SSH key...
	I0805 16:33:16.509473    4095 main.go:141] libmachine: Creating Disk image...
	I0805 16:33:16.509481    4095 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:33:16.509657    4095 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/offline-docker-951000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/offline-docker-951000/disk.qcow2
	I0805 16:33:16.519057    4095 main.go:141] libmachine: STDOUT: 
	I0805 16:33:16.519074    4095 main.go:141] libmachine: STDERR: 
	I0805 16:33:16.519143    4095 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/offline-docker-951000/disk.qcow2 +20000M
	I0805 16:33:16.527068    4095 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:33:16.527086    4095 main.go:141] libmachine: STDERR: 
	I0805 16:33:16.527095    4095 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/offline-docker-951000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/offline-docker-951000/disk.qcow2
	I0805 16:33:16.527101    4095 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:33:16.527113    4095 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:33:16.527148    4095 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/offline-docker-951000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/offline-docker-951000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/offline-docker-951000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:22:ee:11:5b:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/offline-docker-951000/disk.qcow2
	I0805 16:33:16.528744    4095 main.go:141] libmachine: STDOUT: 
	I0805 16:33:16.528758    4095 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:33:16.528771    4095 client.go:171] duration metric: took 262.784042ms to LocalClient.Create
	I0805 16:33:18.528958    4095 start.go:128] duration metric: took 2.321104833s to createHost
	I0805 16:33:18.529022    4095 start.go:83] releasing machines lock for "offline-docker-951000", held for 2.321558125s
	W0805 16:33:18.529448    4095 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-951000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-951000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:33:18.536989    4095 out.go:177] 
	W0805 16:33:18.541321    4095 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:33:18.541352    4095 out.go:239] * 
	* 
	W0805 16:33:18.543988    4095 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:33:18.553102    4095 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-951000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-05 16:33:18.567358 -0700 PDT m=+2788.999610959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-951000 -n offline-docker-951000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-951000 -n offline-docker-951000: exit status 7 (68.107041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-951000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-951000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-951000
--- FAIL: TestOffline (10.27s)
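
Note: the signature here, ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused, recurs in nearly every Start-related failure in this run. minikube launches QEMU via /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the socket_vmnet daemon's unix socket and pass QEMU the network fd (fd=3 in the command line above); nothing was listening on this agent. A minimal sketch, assuming the default socket path from the logs and that a plain stream connect is enough to distinguish a missing daemon, to probe the socket from Go:

	// vmnetcheck.go: probe whether a socket_vmnet daemon is accepting connections.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		sock := "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" (or "no such file or directory") matches
			// the QEMU launch failure above: socket_vmnet_client has no
			// daemon to connect to.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening at", sock)
	}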

TestCertOptions (10.12s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-906000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-906000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.850032792s)

-- stdout --
	* [cert-options-906000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-906000" primary control-plane node in "cert-options-906000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-906000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-906000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-906000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-906000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-906000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.735584ms)

-- stdout --
	* The control-plane node cert-options-906000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-906000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-906000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-906000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-906000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-906000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.01275ms)

-- stdout --
	* The control-plane node cert-options-906000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-906000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-906000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	* The control-plane node cert-options-906000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-906000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-05 16:33:49.378128 -0700 PDT m=+2819.811001918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-906000 -n cert-options-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-906000 -n cert-options-906000: exit status 7 (29.5815ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-906000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-906000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-906000
--- FAIL: TestCertOptions (10.12s)
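
Every start attempt in this group dies on the same underlying error: nothing is accepting connections on /var/run/socket_vmnet, so socket_vmnet_client (visible in the qemu invocations in the traces below) cannot hand qemu its network socket. A minimal triage sketch for the affected host; the brew service name assumes socket_vmnet was installed via Homebrew, so adjust if it was built from source:

    # Is the control socket present?
    ls -l /var/run/socket_vmnet
    # Assumed Homebrew-managed install: restart the daemon as root
    sudo brew services restart socket_vmnet
    # Retry one failing profile to confirm the network comes back
    out/minikube-darwin-arm64 start -p cert-options-906000 --driver=qemu2
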

TestCertExpiration (195.13s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-035000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-035000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.796118333s)

-- stdout --
	* [cert-expiration-035000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-035000" primary control-plane node in "cert-expiration-035000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-035000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-035000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-035000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-035000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-035000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.221654958s)

-- stdout --
	* [cert-expiration-035000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-035000" primary control-plane node in "cert-expiration-035000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-035000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-035000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-035000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-035000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-035000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-035000" primary control-plane node in "cert-expiration-035000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-035000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-035000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-035000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-05 16:36:49.421195 -0700 PDT m=+2999.857697584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-035000 -n cert-expiration-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-035000 -n cert-expiration-035000: exit status 7 (29.2295ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-035000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-035000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-035000
--- FAIL: TestCertExpiration (195.13s)
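
The same socket_vmnet refusal is hit twice here, once on fresh creation and once on the "Restarting existing qemu2 VM" path, so the certificate-expiration logic under test never ran. Following the advice printed in the stderr above, a clean retry once the daemon is back would be:

    # Drop the half-created profile, then redo the short-expiry start
    out/minikube-darwin-arm64 delete -p cert-expiration-035000
    out/minikube-darwin-arm64 start -p cert-expiration-035000 --memory=2048 --cert-expiration=3m --driver=qemu2
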

TestDockerFlags (10.22s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-290000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-290000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.988157333s)

-- stdout --
	* [docker-flags-290000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-290000" primary control-plane node in "docker-flags-290000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-290000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:33:29.185071    4293 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:33:29.185200    4293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:33:29.185209    4293 out.go:304] Setting ErrFile to fd 2...
	I0805 16:33:29.185212    4293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:33:29.185332    4293 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:33:29.186360    4293 out.go:298] Setting JSON to false
	I0805 16:33:29.202504    4293 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3780,"bootTime":1722897029,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:33:29.202573    4293 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:33:29.207570    4293 out.go:177] * [docker-flags-290000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:33:29.214260    4293 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:33:29.214314    4293 notify.go:220] Checking for updates...
	I0805 16:33:29.221359    4293 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:33:29.222742    4293 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:33:29.225336    4293 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:33:29.228369    4293 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:33:29.231412    4293 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:33:29.234632    4293 config.go:182] Loaded profile config "force-systemd-flag-939000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:33:29.234703    4293 config.go:182] Loaded profile config "multinode-860000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:33:29.234749    4293 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:33:29.238339    4293 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 16:33:29.245390    4293 start.go:297] selected driver: qemu2
	I0805 16:33:29.245397    4293 start.go:901] validating driver "qemu2" against <nil>
	I0805 16:33:29.245406    4293 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:33:29.247766    4293 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:33:29.250338    4293 out.go:177] * Automatically selected the socket_vmnet network
	I0805 16:33:29.253505    4293 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0805 16:33:29.253554    4293 cni.go:84] Creating CNI manager for ""
	I0805 16:33:29.253563    4293 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:33:29.253567    4293 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 16:33:29.253602    4293 start.go:340] cluster config:
	{Name:docker-flags-290000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-290000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:33:29.257379    4293 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:33:29.264371    4293 out.go:177] * Starting "docker-flags-290000" primary control-plane node in "docker-flags-290000" cluster
	I0805 16:33:29.267386    4293 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:33:29.267404    4293 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 16:33:29.267413    4293 cache.go:56] Caching tarball of preloaded images
	I0805 16:33:29.267471    4293 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:33:29.267476    4293 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:33:29.267534    4293 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/docker-flags-290000/config.json ...
	I0805 16:33:29.267545    4293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/docker-flags-290000/config.json: {Name:mkf8a1d62c69b678f7aeab728783404f7661ea47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:33:29.267762    4293 start.go:360] acquireMachinesLock for docker-flags-290000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:33:29.267797    4293 start.go:364] duration metric: took 26.917µs to acquireMachinesLock for "docker-flags-290000"
	I0805 16:33:29.267808    4293 start.go:93] Provisioning new machine with config: &{Name:docker-flags-290000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-290000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:33:29.267833    4293 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:33:29.275293    4293 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 16:33:29.293267    4293 start.go:159] libmachine.API.Create for "docker-flags-290000" (driver="qemu2")
	I0805 16:33:29.293303    4293 client.go:168] LocalClient.Create starting
	I0805 16:33:29.293368    4293 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:33:29.293398    4293 main.go:141] libmachine: Decoding PEM data...
	I0805 16:33:29.293409    4293 main.go:141] libmachine: Parsing certificate...
	I0805 16:33:29.293448    4293 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:33:29.293472    4293 main.go:141] libmachine: Decoding PEM data...
	I0805 16:33:29.293480    4293 main.go:141] libmachine: Parsing certificate...
	I0805 16:33:29.293907    4293 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:33:29.441725    4293 main.go:141] libmachine: Creating SSH key...
	I0805 16:33:29.503741    4293 main.go:141] libmachine: Creating Disk image...
	I0805 16:33:29.503746    4293 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:33:29.503921    4293 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/docker-flags-290000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/docker-flags-290000/disk.qcow2
	I0805 16:33:29.513272    4293 main.go:141] libmachine: STDOUT: 
	I0805 16:33:29.513287    4293 main.go:141] libmachine: STDERR: 
	I0805 16:33:29.513341    4293 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/docker-flags-290000/disk.qcow2 +20000M
	I0805 16:33:29.521311    4293 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:33:29.521327    4293 main.go:141] libmachine: STDERR: 
	I0805 16:33:29.521336    4293 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/docker-flags-290000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/docker-flags-290000/disk.qcow2
	I0805 16:33:29.521342    4293 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:33:29.521352    4293 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:33:29.521381    4293 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/docker-flags-290000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/docker-flags-290000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/docker-flags-290000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:ad:c4:1b:89:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/docker-flags-290000/disk.qcow2
	I0805 16:33:29.523042    4293 main.go:141] libmachine: STDOUT: 
	I0805 16:33:29.523057    4293 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:33:29.523076    4293 client.go:171] duration metric: took 229.772458ms to LocalClient.Create
	I0805 16:33:31.525204    4293 start.go:128] duration metric: took 2.257397667s to createHost
	I0805 16:33:31.525249    4293 start.go:83] releasing machines lock for "docker-flags-290000", held for 2.257489083s
	W0805 16:33:31.525314    4293 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:33:31.540626    4293 out.go:177] * Deleting "docker-flags-290000" in qemu2 ...
	W0805 16:33:31.566594    4293 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:33:31.566620    4293 start.go:729] Will try again in 5 seconds ...
	I0805 16:33:36.568756    4293 start.go:360] acquireMachinesLock for docker-flags-290000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:33:36.769970    4293 start.go:364] duration metric: took 201.066708ms to acquireMachinesLock for "docker-flags-290000"
	I0805 16:33:36.770110    4293 start.go:93] Provisioning new machine with config: &{Name:docker-flags-290000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-290000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:33:36.770365    4293 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:33:36.779091    4293 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 16:33:36.826763    4293 start.go:159] libmachine.API.Create for "docker-flags-290000" (driver="qemu2")
	I0805 16:33:36.826810    4293 client.go:168] LocalClient.Create starting
	I0805 16:33:36.826938    4293 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:33:36.827000    4293 main.go:141] libmachine: Decoding PEM data...
	I0805 16:33:36.827015    4293 main.go:141] libmachine: Parsing certificate...
	I0805 16:33:36.827075    4293 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:33:36.827121    4293 main.go:141] libmachine: Decoding PEM data...
	I0805 16:33:36.827136    4293 main.go:141] libmachine: Parsing certificate...
	I0805 16:33:36.827739    4293 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:33:36.993638    4293 main.go:141] libmachine: Creating SSH key...
	I0805 16:33:37.070494    4293 main.go:141] libmachine: Creating Disk image...
	I0805 16:33:37.070510    4293 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:33:37.070812    4293 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/docker-flags-290000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/docker-flags-290000/disk.qcow2
	I0805 16:33:37.080032    4293 main.go:141] libmachine: STDOUT: 
	I0805 16:33:37.080048    4293 main.go:141] libmachine: STDERR: 
	I0805 16:33:37.080102    4293 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/docker-flags-290000/disk.qcow2 +20000M
	I0805 16:33:37.087947    4293 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:33:37.087963    4293 main.go:141] libmachine: STDERR: 
	I0805 16:33:37.087980    4293 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/docker-flags-290000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/docker-flags-290000/disk.qcow2
	I0805 16:33:37.087984    4293 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:33:37.087994    4293 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:33:37.088027    4293 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/docker-flags-290000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/docker-flags-290000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/docker-flags-290000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:27:9a:41:e9:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/docker-flags-290000/disk.qcow2
	I0805 16:33:37.089699    4293 main.go:141] libmachine: STDOUT: 
	I0805 16:33:37.089715    4293 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:33:37.089729    4293 client.go:171] duration metric: took 262.917166ms to LocalClient.Create
	I0805 16:33:39.091868    4293 start.go:128] duration metric: took 2.321520791s to createHost
	I0805 16:33:39.091918    4293 start.go:83] releasing machines lock for "docker-flags-290000", held for 2.321959708s
	W0805 16:33:39.092278    4293 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-290000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-290000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:33:39.110079    4293 out.go:177] 
	W0805 16:33:39.117957    4293 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:33:39.117979    4293 out.go:239] * 
	* 
	W0805 16:33:39.121045    4293 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:33:39.130909    4293 out.go:177] 

** /stderr **
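
The trace above shows how the qemu2 driver wires guest networking: qemu is launched through socket_vmnet_client, which connects to /var/run/socket_vmnet and passes the connected socket to qemu as fd 3 (-netdev socket,id=net0,fd=3). The connect step is where the refusal occurs, before qemu itself ever runs. Stripped to its shape, with paths and flags taken from the invocation in the trace and the disk/ISO arguments elided:

    # socket_vmnet_client dials the control socket, then execs qemu with
    # the connection inherited as file descriptor 3
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2048 -smp 2 \
      -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3 -daemonize ...
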
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-290000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-290000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-290000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (77.032417ms)

-- stdout --
	* The control-plane node docker-flags-290000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-290000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-290000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-290000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-290000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-290000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-290000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-290000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-290000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (42.844958ms)

-- stdout --
	* The control-plane node docker-flags-290000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-290000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-290000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-290000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-290000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-290000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-05 16:33:39.267815 -0700 PDT m=+2809.700485126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-290000 -n docker-flags-290000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-290000 -n docker-flags-290000: exit status 7 (28.447666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-290000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-290000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-290000
--- FAIL: TestDockerFlags (10.22s)
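
As elsewhere in this group, the FOO=BAR, BAZ=BAT and --debug assertions never had a running dockerd to inspect. With socket_vmnet healthy, propagation of the flags can be verified with the same probes the test issues:

    # Environment= should contain FOO=BAR and BAZ=BAT (from --docker-env)
    out/minikube-darwin-arm64 -p docker-flags-290000 ssh "sudo systemctl show docker --property=Environment --no-pager"
    # ExecStart should carry --debug and --icc=true (from --docker-opt)
    out/minikube-darwin-arm64 -p docker-flags-290000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
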

TestForceSystemdFlag (10.2s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-939000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-939000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.011541334s)

-- stdout --
	* [force-systemd-flag-939000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-939000" primary control-plane node in "force-systemd-flag-939000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-939000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:33:24.233925    4270 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:33:24.234064    4270 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:33:24.234067    4270 out.go:304] Setting ErrFile to fd 2...
	I0805 16:33:24.234069    4270 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:33:24.234203    4270 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:33:24.235294    4270 out.go:298] Setting JSON to false
	I0805 16:33:24.251039    4270 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3775,"bootTime":1722897029,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:33:24.251106    4270 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:33:24.257286    4270 out.go:177] * [force-systemd-flag-939000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:33:24.264272    4270 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:33:24.264324    4270 notify.go:220] Checking for updates...
	I0805 16:33:24.272204    4270 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:33:24.275201    4270 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:33:24.278245    4270 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:33:24.281180    4270 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:33:24.284228    4270 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:33:24.287512    4270 config.go:182] Loaded profile config "force-systemd-env-374000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:33:24.287602    4270 config.go:182] Loaded profile config "multinode-860000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:33:24.287655    4270 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:33:24.292196    4270 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 16:33:24.299251    4270 start.go:297] selected driver: qemu2
	I0805 16:33:24.299256    4270 start.go:901] validating driver "qemu2" against <nil>
	I0805 16:33:24.299261    4270 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:33:24.301632    4270 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:33:24.304185    4270 out.go:177] * Automatically selected the socket_vmnet network
	I0805 16:33:24.307271    4270 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 16:33:24.307305    4270 cni.go:84] Creating CNI manager for ""
	I0805 16:33:24.307315    4270 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:33:24.307319    4270 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 16:33:24.307365    4270 start.go:340] cluster config:
	{Name:force-systemd-flag-939000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-939000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:33:24.311113    4270 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:33:24.317237    4270 out.go:177] * Starting "force-systemd-flag-939000" primary control-plane node in "force-systemd-flag-939000" cluster
	I0805 16:33:24.321222    4270 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:33:24.321237    4270 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 16:33:24.321246    4270 cache.go:56] Caching tarball of preloaded images
	I0805 16:33:24.321320    4270 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:33:24.321326    4270 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:33:24.321421    4270 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/force-systemd-flag-939000/config.json ...
	I0805 16:33:24.321434    4270 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/force-systemd-flag-939000/config.json: {Name:mke89fed64d35ab176647b7d99104c73f45aa35c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:33:24.321671    4270 start.go:360] acquireMachinesLock for force-systemd-flag-939000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:33:24.321706    4270 start.go:364] duration metric: took 29.333µs to acquireMachinesLock for "force-systemd-flag-939000"
	I0805 16:33:24.321720    4270 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-939000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-939000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:33:24.321746    4270 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:33:24.330207    4270 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 16:33:24.348687    4270 start.go:159] libmachine.API.Create for "force-systemd-flag-939000" (driver="qemu2")
	I0805 16:33:24.348722    4270 client.go:168] LocalClient.Create starting
	I0805 16:33:24.348788    4270 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:33:24.348825    4270 main.go:141] libmachine: Decoding PEM data...
	I0805 16:33:24.348835    4270 main.go:141] libmachine: Parsing certificate...
	I0805 16:33:24.348871    4270 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:33:24.348897    4270 main.go:141] libmachine: Decoding PEM data...
	I0805 16:33:24.348905    4270 main.go:141] libmachine: Parsing certificate...
	I0805 16:33:24.349250    4270 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:33:24.495350    4270 main.go:141] libmachine: Creating SSH key...
	I0805 16:33:24.562328    4270 main.go:141] libmachine: Creating Disk image...
	I0805 16:33:24.562334    4270 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:33:24.562495    4270 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-flag-939000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-flag-939000/disk.qcow2
	I0805 16:33:24.571846    4270 main.go:141] libmachine: STDOUT: 
	I0805 16:33:24.571866    4270 main.go:141] libmachine: STDERR: 
	I0805 16:33:24.571929    4270 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-flag-939000/disk.qcow2 +20000M
	I0805 16:33:24.579792    4270 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:33:24.579805    4270 main.go:141] libmachine: STDERR: 
	I0805 16:33:24.579825    4270 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-flag-939000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-flag-939000/disk.qcow2
	I0805 16:33:24.579835    4270 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:33:24.579855    4270 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:33:24.579891    4270 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-flag-939000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-flag-939000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-flag-939000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:20:38:13:b3:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-flag-939000/disk.qcow2
	I0805 16:33:24.581489    4270 main.go:141] libmachine: STDOUT: 
	I0805 16:33:24.581509    4270 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:33:24.581530    4270 client.go:171] duration metric: took 232.805292ms to LocalClient.Create
	I0805 16:33:26.583669    4270 start.go:128] duration metric: took 2.261946125s to createHost
	I0805 16:33:26.583766    4270 start.go:83] releasing machines lock for "force-systemd-flag-939000", held for 2.26209475s
	W0805 16:33:26.583825    4270 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:33:26.608967    4270 out.go:177] * Deleting "force-systemd-flag-939000" in qemu2 ...
	W0805 16:33:26.631148    4270 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:33:26.631171    4270 start.go:729] Will try again in 5 seconds ...
	I0805 16:33:31.633260    4270 start.go:360] acquireMachinesLock for force-systemd-flag-939000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:33:31.633726    4270 start.go:364] duration metric: took 354.334µs to acquireMachinesLock for "force-systemd-flag-939000"
	I0805 16:33:31.633853    4270 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-939000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-939000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:33:31.634176    4270 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:33:31.642543    4270 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 16:33:31.692184    4270 start.go:159] libmachine.API.Create for "force-systemd-flag-939000" (driver="qemu2")
	I0805 16:33:31.692243    4270 client.go:168] LocalClient.Create starting
	I0805 16:33:31.692349    4270 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:33:31.692414    4270 main.go:141] libmachine: Decoding PEM data...
	I0805 16:33:31.692428    4270 main.go:141] libmachine: Parsing certificate...
	I0805 16:33:31.692490    4270 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:33:31.692543    4270 main.go:141] libmachine: Decoding PEM data...
	I0805 16:33:31.692553    4270 main.go:141] libmachine: Parsing certificate...
	I0805 16:33:31.693760    4270 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:33:31.857816    4270 main.go:141] libmachine: Creating SSH key...
	I0805 16:33:32.151000    4270 main.go:141] libmachine: Creating Disk image...
	I0805 16:33:32.151013    4270 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:33:32.151254    4270 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-flag-939000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-flag-939000/disk.qcow2
	I0805 16:33:32.160917    4270 main.go:141] libmachine: STDOUT: 
	I0805 16:33:32.160942    4270 main.go:141] libmachine: STDERR: 
	I0805 16:33:32.160995    4270 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-flag-939000/disk.qcow2 +20000M
	I0805 16:33:32.168952    4270 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:33:32.168963    4270 main.go:141] libmachine: STDERR: 
	I0805 16:33:32.168995    4270 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-flag-939000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-flag-939000/disk.qcow2
	I0805 16:33:32.168999    4270 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:33:32.169010    4270 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:33:32.169039    4270 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-flag-939000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-flag-939000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-flag-939000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:91:f7:66:9f:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-flag-939000/disk.qcow2
	I0805 16:33:32.170676    4270 main.go:141] libmachine: STDOUT: 
	I0805 16:33:32.170690    4270 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:33:32.170706    4270 client.go:171] duration metric: took 478.466708ms to LocalClient.Create
	I0805 16:33:34.172843    4270 start.go:128] duration metric: took 2.538685958s to createHost
	I0805 16:33:34.173005    4270 start.go:83] releasing machines lock for "force-systemd-flag-939000", held for 2.539244375s
	W0805 16:33:34.173439    4270 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-939000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-939000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:33:34.185971    4270 out.go:177] 
	W0805 16:33:34.191072    4270 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:33:34.191096    4270 out.go:239] * 
	* 
	W0805 16:33:34.193762    4270 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:33:34.204958    4270 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-939000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-939000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-939000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (75.7605ms)

-- stdout --
	* The control-plane node force-systemd-flag-939000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-939000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-939000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-05 16:33:34.299083 -0700 PDT m=+2804.731652168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-939000 -n force-systemd-flag-939000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-939000 -n force-systemd-flag-939000: exit status 7 (32.152833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-939000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-939000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-939000
--- FAIL: TestForceSystemdFlag (10.20s)
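Both force-systemd failures in this report reduce to the same host-side problem: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and VM creation aborts within seconds. Below is a minimal Go sketch of a pre-flight probe that reproduces the same "connection refused" outside of minikube; the socket path is taken from the failing command line above, and nothing else in the sketch is minikube code.

	// probe_socket_vmnet.go - dial the socket_vmnet socket the same way
	// socket_vmnet_client would, and report whether a daemon is listening.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Socket path from the failing QEMU invocation in the log.
		const sock = "/var/run/socket_vmnet"

		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A missing or idle daemon yields "connect: connection refused",
			// matching the STDERR captured by libmachine above.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

When this probe fails on the CI host, every qemu2-driver test that selects the socket_vmnet network can be expected to fail the same way, which is consistent with the cluster of roughly 10-second failures in this report.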

TestForceSystemdEnv (10.44s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-374000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-374000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.252141458s)

-- stdout --
	* [force-systemd-env-374000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-374000" primary control-plane node in "force-systemd-env-374000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-374000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:33:18.741677    4236 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:33:18.741805    4236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:33:18.741808    4236 out.go:304] Setting ErrFile to fd 2...
	I0805 16:33:18.741810    4236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:33:18.741960    4236 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:33:18.743021    4236 out.go:298] Setting JSON to false
	I0805 16:33:18.759261    4236 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3769,"bootTime":1722897029,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:33:18.759336    4236 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:33:18.764639    4236 out.go:177] * [force-systemd-env-374000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:33:18.772612    4236 notify.go:220] Checking for updates...
	I0805 16:33:18.776490    4236 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:33:18.784535    4236 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:33:18.792586    4236 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:33:18.798499    4236 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:33:18.806604    4236 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:33:18.814579    4236 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0805 16:33:18.818888    4236 config.go:182] Loaded profile config "multinode-860000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:33:18.818954    4236 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:33:18.822488    4236 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 16:33:18.829589    4236 start.go:297] selected driver: qemu2
	I0805 16:33:18.829595    4236 start.go:901] validating driver "qemu2" against <nil>
	I0805 16:33:18.829600    4236 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:33:18.831866    4236 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:33:18.835503    4236 out.go:177] * Automatically selected the socket_vmnet network
	I0805 16:33:18.839612    4236 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 16:33:18.839652    4236 cni.go:84] Creating CNI manager for ""
	I0805 16:33:18.839662    4236 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:33:18.839665    4236 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 16:33:18.839697    4236 start.go:340] cluster config:
	{Name:force-systemd-env-374000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-374000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:33:18.843401    4236 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:33:18.850550    4236 out.go:177] * Starting "force-systemd-env-374000" primary control-plane node in "force-systemd-env-374000" cluster
	I0805 16:33:18.854562    4236 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:33:18.854575    4236 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 16:33:18.854587    4236 cache.go:56] Caching tarball of preloaded images
	I0805 16:33:18.854643    4236 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:33:18.854649    4236 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:33:18.854707    4236 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/force-systemd-env-374000/config.json ...
	I0805 16:33:18.854719    4236 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/force-systemd-env-374000/config.json: {Name:mk074f8ab636324e9203877f16e4727502fa2ac7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:33:18.855046    4236 start.go:360] acquireMachinesLock for force-systemd-env-374000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:33:18.855081    4236 start.go:364] duration metric: took 27.541µs to acquireMachinesLock for "force-systemd-env-374000"
	I0805 16:33:18.855091    4236 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-374000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-374000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:33:18.855117    4236 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:33:18.863569    4236 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 16:33:18.881090    4236 start.go:159] libmachine.API.Create for "force-systemd-env-374000" (driver="qemu2")
	I0805 16:33:18.881122    4236 client.go:168] LocalClient.Create starting
	I0805 16:33:18.881186    4236 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:33:18.881220    4236 main.go:141] libmachine: Decoding PEM data...
	I0805 16:33:18.881229    4236 main.go:141] libmachine: Parsing certificate...
	I0805 16:33:18.881270    4236 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:33:18.881293    4236 main.go:141] libmachine: Decoding PEM data...
	I0805 16:33:18.881303    4236 main.go:141] libmachine: Parsing certificate...
	I0805 16:33:18.881640    4236 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:33:19.030737    4236 main.go:141] libmachine: Creating SSH key...
	I0805 16:33:19.097170    4236 main.go:141] libmachine: Creating Disk image...
	I0805 16:33:19.097178    4236 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:33:19.097355    4236 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-env-374000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-env-374000/disk.qcow2
	I0805 16:33:19.106948    4236 main.go:141] libmachine: STDOUT: 
	I0805 16:33:19.106979    4236 main.go:141] libmachine: STDERR: 
	I0805 16:33:19.107033    4236 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-env-374000/disk.qcow2 +20000M
	I0805 16:33:19.115142    4236 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:33:19.115157    4236 main.go:141] libmachine: STDERR: 
	I0805 16:33:19.115171    4236 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-env-374000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-env-374000/disk.qcow2
	I0805 16:33:19.115175    4236 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:33:19.115200    4236 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:33:19.115235    4236 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-env-374000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-env-374000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-env-374000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:25:d9:c9:d6:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-env-374000/disk.qcow2
	I0805 16:33:19.116802    4236 main.go:141] libmachine: STDOUT: 
	I0805 16:33:19.116819    4236 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:33:19.116835    4236 client.go:171] duration metric: took 235.714708ms to LocalClient.Create
	I0805 16:33:21.118893    4236 start.go:128] duration metric: took 2.263812292s to createHost
	I0805 16:33:21.118914    4236 start.go:83] releasing machines lock for "force-systemd-env-374000", held for 2.263874875s
	W0805 16:33:21.118931    4236 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:33:21.125866    4236 out.go:177] * Deleting "force-systemd-env-374000" in qemu2 ...
	W0805 16:33:21.139271    4236 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:33:21.139287    4236 start.go:729] Will try again in 5 seconds ...
	I0805 16:33:26.141445    4236 start.go:360] acquireMachinesLock for force-systemd-env-374000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:33:26.583936    4236 start.go:364] duration metric: took 442.398541ms to acquireMachinesLock for "force-systemd-env-374000"
	I0805 16:33:26.584052    4236 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-374000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-374000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:33:26.584302    4236 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:33:26.597955    4236 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 16:33:26.648279    4236 start.go:159] libmachine.API.Create for "force-systemd-env-374000" (driver="qemu2")
	I0805 16:33:26.648321    4236 client.go:168] LocalClient.Create starting
	I0805 16:33:26.648448    4236 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:33:26.648513    4236 main.go:141] libmachine: Decoding PEM data...
	I0805 16:33:26.648533    4236 main.go:141] libmachine: Parsing certificate...
	I0805 16:33:26.648588    4236 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:33:26.648636    4236 main.go:141] libmachine: Decoding PEM data...
	I0805 16:33:26.648648    4236 main.go:141] libmachine: Parsing certificate...
	I0805 16:33:26.649237    4236 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:33:26.816476    4236 main.go:141] libmachine: Creating SSH key...
	I0805 16:33:26.900504    4236 main.go:141] libmachine: Creating Disk image...
	I0805 16:33:26.900510    4236 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:33:26.900712    4236 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-env-374000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-env-374000/disk.qcow2
	I0805 16:33:26.910310    4236 main.go:141] libmachine: STDOUT: 
	I0805 16:33:26.910335    4236 main.go:141] libmachine: STDERR: 
	I0805 16:33:26.910389    4236 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-env-374000/disk.qcow2 +20000M
	I0805 16:33:26.918422    4236 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:33:26.918436    4236 main.go:141] libmachine: STDERR: 
	I0805 16:33:26.918454    4236 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-env-374000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-env-374000/disk.qcow2
	I0805 16:33:26.918460    4236 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:33:26.918473    4236 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:33:26.918499    4236 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-env-374000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-env-374000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-env-374000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:9b:86:be:ec:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/force-systemd-env-374000/disk.qcow2
	I0805 16:33:26.920105    4236 main.go:141] libmachine: STDOUT: 
	I0805 16:33:26.920122    4236 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:33:26.920135    4236 client.go:171] duration metric: took 271.8115ms to LocalClient.Create
	I0805 16:33:28.922356    4236 start.go:128] duration metric: took 2.33806975s to createHost
	I0805 16:33:28.922405    4236 start.go:83] releasing machines lock for "force-systemd-env-374000", held for 2.338453083s
	W0805 16:33:28.922743    4236 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-374000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-374000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:33:28.935204    4236 out.go:177] 
	W0805 16:33:28.940267    4236 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:33:28.940288    4236 out.go:239] * 
	* 
	W0805 16:33:28.942825    4236 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:33:28.952196    4236 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-374000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-374000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-374000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.738958ms)

-- stdout --
	* The control-plane node force-systemd-env-374000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-374000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-374000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-05 16:33:29.045463 -0700 PDT m=+2799.477927126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-374000 -n force-systemd-env-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-374000 -n force-systemd-env-374000: exit status 7 (34.686875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-374000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-374000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-374000
--- FAIL: TestForceSystemdEnv (10.44s)
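TestForceSystemdEnv fails identically, and its log makes the retry shape visible: one failed createHost, deletion of the half-created profile, a 5-second wait, a single second attempt, then exit status 80 with GUEST_PROVISION. The sketch below compresses that control flow for readability; it is illustrative only, not minikube's actual start.go logic.

	// Illustrative retry shape reconstructed from the log above.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for the qemu2 driver's host creation, which
	// fails for as long as socket_vmnet is unreachable.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			// The log shows the broken profile being deleted here.
			time.Sleep(5 * time.Second)
			if err = createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80) // the tests observe exit status 80
			}
		}
	}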

TestFunctional/parallel/ServiceCmdConnect (34.34s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-280000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-280000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-vsbf4" [1268f2c7-a36d-4d30-8ab2-edc37b822001] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-vsbf4" [1268f2c7-a36d-4d30-8ab2-edc37b822001] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.0040695s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:30186
functional_test.go:1657: error fetching http://192.168.105.4:30186: Get "http://192.168.105.4:30186": dial tcp 192.168.105.4:30186: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30186: Get "http://192.168.105.4:30186": dial tcp 192.168.105.4:30186: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30186: Get "http://192.168.105.4:30186": dial tcp 192.168.105.4:30186: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30186: Get "http://192.168.105.4:30186": dial tcp 192.168.105.4:30186: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30186: Get "http://192.168.105.4:30186": dial tcp 192.168.105.4:30186: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30186: Get "http://192.168.105.4:30186": dial tcp 192.168.105.4:30186: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30186: Get "http://192.168.105.4:30186": dial tcp 192.168.105.4:30186: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:30186: Get "http://192.168.105.4:30186": dial tcp 192.168.105.4:30186: connect: connection refused
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-280000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-6f49f58cd5-vsbf4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-280000/192.168.105.4
Start Time:       Mon, 05 Aug 2024 15:57:59 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=6f49f58cd5
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-6f49f58cd5
Containers:
  echoserver-arm:
    Container ID:   docker://4fcf0f05f698fbf8e26923c8a02069686e016d66ca3e7724a0007396a2d9156b
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 05 Aug 2024 15:58:19 -0700
      Finished:     Mon, 05 Aug 2024 15:58:19 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tvm8f (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-tvm8f:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  33s                default-scheduler  Successfully assigned default/hello-node-connect-6f49f58cd5-vsbf4 to functional-280000
  Normal   Pulling    32s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     29s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 2.923s (2.971s including waiting). Image size: 84957542 bytes.
  Normal   Created    13s (x3 over 29s)  kubelet            Created container echoserver-arm
  Normal   Started    13s (x3 over 29s)  kubelet            Started container echoserver-arm
  Normal   Pulled     13s (x2 over 29s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    2s (x4 over 28s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-6f49f58cd5-vsbf4_default(1268f2c7-a36d-4d30-8ab2-edc37b822001)

functional_test.go:1604: (dbg) Run:  kubectl --context functional-280000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
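The "exec format error" line above is the root cause for this test: the kernel refused to execute the container's entrypoint because the nginx binary inside registry.k8s.io/echoserver-arm:1.8 is built for a different CPU architecture than this arm64 host, so the pod crash-loops and the NodePort service keeps no ready endpoints (hence every "connection refused" earlier). A small sketch, assuming a binary has been copied out of the image to a local path (the /tmp/nginx path is hypothetical), of how one might confirm the mismatch with Go's debug/elf package:

	// arch_check.go - print an ELF binary's machine type to confirm the
	// architecture mismatch behind "exec format error".
	package main

	import (
		"debug/elf"
		"fmt"
		"log"
		"os"
	)

	func main() {
		path := "/tmp/nginx" // hypothetical: a binary extracted from the image
		if len(os.Args) > 1 {
			path = os.Args[1]
		}

		f, err := elf.Open(path)
		if err != nil {
			log.Fatalf("not a readable ELF binary: %v", err)
		}
		defer f.Close()

		fmt.Printf("machine=%v class=%v\n", f.Machine, f.Class)
		if f.Machine != elf.EM_AARCH64 {
			fmt.Println("not an arm64 binary - this explains the exec format error")
		}
	}

An amd64 build reports machine=EM_X86_64 here, which an arm64 kernel without emulation rejects with exactly this error.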
functional_test.go:1610: (dbg) Run:  kubectl --context functional-280000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.41.229
IPs:                      10.100.41.229
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30186/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-280000 -n functional-280000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-280000 ssh -- ls                                                                                          | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT | 05 Aug 24 15:58 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-280000 ssh cat                                                                                            | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT | 05 Aug 24 15:58 PDT |
	|           | /mount-9p/test-1722898703070171000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-280000 ssh stat                                                                                           | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT | 05 Aug 24 15:58 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-280000 ssh stat                                                                                           | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT | 05 Aug 24 15:58 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-280000 ssh sudo                                                                                           | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT | 05 Aug 24 15:58 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-280000 ssh findmnt                                                                                        | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-280000                                                                                                 | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3431790094/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-280000 ssh findmnt                                                                                        | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT | 05 Aug 24 15:58 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-280000 ssh -- ls                                                                                          | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT | 05 Aug 24 15:58 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-280000 ssh sudo                                                                                           | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-280000                                                                                                 | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1561478653/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-280000                                                                                                 | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1561478653/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-280000                                                                                                 | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1561478653/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-280000 ssh findmnt                                                                                        | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-280000 ssh findmnt                                                                                        | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT | 05 Aug 24 15:58 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-280000 ssh findmnt                                                                                        | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT | 05 Aug 24 15:58 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-280000 ssh findmnt                                                                                        | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT |                     |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-280000 ssh findmnt                                                                                        | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT | 05 Aug 24 15:58 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-280000 ssh findmnt                                                                                        | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT | 05 Aug 24 15:58 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-280000 ssh findmnt                                                                                        | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT | 05 Aug 24 15:58 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-280000                                                                                                 | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-280000                                                                                                 | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-280000                                                                                                 | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-280000 --dry-run                                                                                       | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-280000 | jenkins | v1.33.1 | 05 Aug 24 15:58 PDT |                     |
	|           | -p functional-280000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
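	
	The mount rows above exercise minikube's 9p mount path end to end and can be replayed by hand against the same profile. A minimal sketch using only flags that appear in the table above, assuming a running functional-280000 cluster; the host directory is a stand-in, since the TestFunctional temp path in the table is machine-generated:
	
	  # host: expose a host path inside the VM over 9p (the test opens three such mounts)
	  minikube mount -p functional-280000 "$HOME/mnt-demo:/mount1" --alsologtostderr -v=1
	  # guest: verify the mount is visible, mirroring the test's findmnt probes
	  minikube ssh -p functional-280000 -- findmnt -T /mount1
	  # host: kill the profile's mount processes, mirroring the --kill=true row
	  minikube mount -p functional-280000 --kill=true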
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 15:58:31
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 15:58:31.560366    2221 out.go:291] Setting OutFile to fd 1 ...
	I0805 15:58:31.560483    2221 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 15:58:31.560486    2221 out.go:304] Setting ErrFile to fd 2...
	I0805 15:58:31.560489    2221 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 15:58:31.560633    2221 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 15:58:31.561717    2221 out.go:298] Setting JSON to false
	I0805 15:58:31.578289    2221 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1682,"bootTime":1722897029,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 15:58:31.578363    2221 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 15:58:31.581747    2221 out.go:177] * [functional-280000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 15:58:31.588715    2221 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 15:58:31.588725    2221 notify.go:220] Checking for updates...
	I0805 15:58:31.595719    2221 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 15:58:31.598718    2221 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 15:58:31.601689    2221 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 15:58:31.604650    2221 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 15:58:31.607688    2221 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 15:58:31.611002    2221 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 15:58:31.611246    2221 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 15:58:31.615660    2221 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 15:58:31.622697    2221 start.go:297] selected driver: qemu2
	I0805 15:58:31.622703    2221 start.go:901] validating driver "qemu2" against &{Name:functional-280000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-280000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 15:58:31.622751    2221 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 15:58:31.624807    2221 cni.go:84] Creating CNI manager for ""
	I0805 15:58:31.624822    2221 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 15:58:31.624862    2221 start.go:340] cluster config:
	{Name:functional-280000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-280000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 15:58:31.636761    2221 out.go:177] * dry-run validation complete!
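	
	The trace above is the "Last Start" recorded for this profile, i.e. the final dry-run invocation from the audit table. A reproduction sketch using only the flags shown in that table; --dry-run stops after config validation, so no cluster state is mutated:
	
	  minikube start -p functional-280000 --dry-run --alsologtostderr -v=1 --driver=qemu2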
	
	
	==> Docker <==
	Aug 05 22:58:24 functional-280000 dockerd[5845]: time="2024-08-05T22:58:24.256829574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 22:58:24 functional-280000 dockerd[5845]: time="2024-08-05T22:58:24.256843199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 22:58:24 functional-280000 dockerd[5845]: time="2024-08-05T22:58:24.256904072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 22:58:24 functional-280000 cri-dockerd[6112]: time="2024-08-05T22:58:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/09572ce2eb8f9bf6dd02e66f0d71a9ead937accca1388dc2de2e1e2ebbd6e7ae/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 05 22:58:25 functional-280000 cri-dockerd[6112]: time="2024-08-05T22:58:25Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Aug 05 22:58:25 functional-280000 dockerd[5845]: time="2024-08-05T22:58:25.448988817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 22:58:25 functional-280000 dockerd[5845]: time="2024-08-05T22:58:25.449024358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 22:58:25 functional-280000 dockerd[5845]: time="2024-08-05T22:58:25.449242854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 22:58:25 functional-280000 dockerd[5845]: time="2024-08-05T22:58:25.449326436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 22:58:25 functional-280000 dockerd[5839]: time="2024-08-05T22:58:25.481787532Z" level=info msg="ignoring event" container=4326692a7fcaa9e48a6f87c0e874fbdbecfe2c18b2ce6225c0a5a143bfbb67fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 05 22:58:25 functional-280000 dockerd[5845]: time="2024-08-05T22:58:25.481865656Z" level=info msg="shim disconnected" id=4326692a7fcaa9e48a6f87c0e874fbdbecfe2c18b2ce6225c0a5a143bfbb67fc namespace=moby
	Aug 05 22:58:25 functional-280000 dockerd[5845]: time="2024-08-05T22:58:25.481889405Z" level=warning msg="cleaning up after shim disconnected" id=4326692a7fcaa9e48a6f87c0e874fbdbecfe2c18b2ce6225c0a5a143bfbb67fc namespace=moby
	Aug 05 22:58:25 functional-280000 dockerd[5845]: time="2024-08-05T22:58:25.481893238Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 05 22:58:26 functional-280000 dockerd[5845]: time="2024-08-05T22:58:26.896059632Z" level=info msg="shim disconnected" id=09572ce2eb8f9bf6dd02e66f0d71a9ead937accca1388dc2de2e1e2ebbd6e7ae namespace=moby
	Aug 05 22:58:26 functional-280000 dockerd[5839]: time="2024-08-05T22:58:26.896094465Z" level=info msg="ignoring event" container=09572ce2eb8f9bf6dd02e66f0d71a9ead937accca1388dc2de2e1e2ebbd6e7ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 05 22:58:26 functional-280000 dockerd[5845]: time="2024-08-05T22:58:26.896498665Z" level=warning msg="cleaning up after shim disconnected" id=09572ce2eb8f9bf6dd02e66f0d71a9ead937accca1388dc2de2e1e2ebbd6e7ae namespace=moby
	Aug 05 22:58:26 functional-280000 dockerd[5845]: time="2024-08-05T22:58:26.896508332Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 05 22:58:27 functional-280000 dockerd[5845]: time="2024-08-05T22:58:27.432514335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 22:58:27 functional-280000 dockerd[5845]: time="2024-08-05T22:58:27.432640291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 22:58:27 functional-280000 dockerd[5845]: time="2024-08-05T22:58:27.432653874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 22:58:27 functional-280000 dockerd[5845]: time="2024-08-05T22:58:27.432721248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 22:58:27 functional-280000 dockerd[5839]: time="2024-08-05T22:58:27.454344314Z" level=info msg="ignoring event" container=a3c39494be87e4eee47433a0a3ac8955d623986bda8693d672f6882eead3eeab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 05 22:58:27 functional-280000 dockerd[5845]: time="2024-08-05T22:58:27.454449437Z" level=info msg="shim disconnected" id=a3c39494be87e4eee47433a0a3ac8955d623986bda8693d672f6882eead3eeab namespace=moby
	Aug 05 22:58:27 functional-280000 dockerd[5845]: time="2024-08-05T22:58:27.454479145Z" level=warning msg="cleaning up after shim disconnected" id=a3c39494be87e4eee47433a0a3ac8955d623986bda8693d672f6882eead3eeab namespace=moby
	Aug 05 22:58:27 functional-280000 dockerd[5845]: time="2024-08-05T22:58:27.454483437Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a3c39494be87e       72565bf5bbedf                                                                                         6 seconds ago        Exited              echoserver-arm            2                   3d539b4489145       hello-node-65f5d5cc78-gvtp2
	4326692a7fcaa       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   8 seconds ago        Exited              mount-munger              0                   09572ce2eb8f9       busybox-mount
	4fcf0f05f698f       72565bf5bbedf                                                                                         14 seconds ago       Exited              echoserver-arm            2                   83200fd055100       hello-node-connect-6f49f58cd5-vsbf4
	a1381dfbec8f5       nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c                         26 seconds ago       Running             myfrontend                0                   bcca2c45ef901       sp-pod
	ce16c5cf545b4       nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         40 seconds ago       Running             nginx                     0                   34a05aae3d26b       nginx-svc
	aa18bdaf10bac       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   5e723fd51232d       coredns-7db6d8ff4d-t8qql
	61d7fe6df9ffb       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       3                   3e927947a1739       storage-provisioner
	b2a3e3a0dd191       2351f570ed0ea                                                                                         About a minute ago   Running             kube-proxy                2                   f08d1e54ef711       kube-proxy-g5l7x
	e59974cda5408       8e97cdb19e7cc                                                                                         About a minute ago   Running             kube-controller-manager   2                   18625facfa273       kube-controller-manager-functional-280000
	4d5e855c233db       014faa467e297                                                                                         About a minute ago   Running             etcd                      2                   4ba57029be0bb       etcd-functional-280000
	fe6179c64b910       d48f992a22722                                                                                         About a minute ago   Running             kube-scheduler            2                   5feb9506c2c95       kube-scheduler-functional-280000
	97209e779f5d8       61773190d42ff                                                                                         About a minute ago   Running             kube-apiserver            0                   dd7403c8be7de       kube-apiserver-functional-280000
	28405778fba72       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       2                   c7586c7dda994       storage-provisioner
	09f18bb7cdb1c       2437cf7621777                                                                                         About a minute ago   Exited              coredns                   1                   b878bf2bb192c       coredns-7db6d8ff4d-t8qql
	50ef24dcd52c0       2351f570ed0ea                                                                                         About a minute ago   Exited              kube-proxy                1                   273a74f83b67f       kube-proxy-g5l7x
	a60927a63a797       d48f992a22722                                                                                         About a minute ago   Exited              kube-scheduler            1                   e43dcace253f2       kube-scheduler-functional-280000
	3e7c42e3850ac       014faa467e297                                                                                         About a minute ago   Exited              etcd                      1                   0b76e17b9b427       etcd-functional-280000
	3e1436ec78b4f       8e97cdb19e7cc                                                                                         About a minute ago   Exited              kube-controller-manager   1                   f033894f27fec       kube-controller-manager-functional-280000
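	
	This table is collected from inside the guest VM; with the docker runtime used by this profile it can be approximated by listing all containers there. A hedged equivalent, not necessarily the exact command the log collector runs:
	
	  minikube ssh -p functional-280000 -- sudo docker ps -a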
	
	
	==> coredns [09f18bb7cdb1] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60839 - 9607 "HINFO IN 3553546678146381173.481399660180709220. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.028343033s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [aa18bdaf10ba] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52220 - 26451 "HINFO IN 6441991472054913693.7481500949985222514. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.055023512s
	[INFO] 10.244.0.1:8116 - 18465 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000098748s
	[INFO] 10.244.0.1:37267 - 37081 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000064332s
	[INFO] 10.244.0.1:59066 - 40000 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000029958s
	[INFO] 10.244.0.1:43257 - 29138 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001139436s
	[INFO] 10.244.0.1:47315 - 50083 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000063082s
	[INFO] 10.244.0.1:53094 - 28037 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000090624s
	
	
	==> describe nodes <==
	Name:               functional-280000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-280000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=functional-280000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T15_56_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 22:56:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-280000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 22:58:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 22:58:26 +0000   Mon, 05 Aug 2024 22:56:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 22:58:26 +0000   Mon, 05 Aug 2024 22:56:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 22:58:26 +0000   Mon, 05 Aug 2024 22:56:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 22:58:26 +0000   Mon, 05 Aug 2024 22:56:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-280000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 646492ce72ee42d99c6f656de2feec76
	  System UUID:                646492ce72ee42d99c6f656de2feec76
	  Boot ID:                    9a0cd302-d6ce-4d69-b869-90e354bbfb94
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-65f5d5cc78-gvtp2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  default                     hello-node-connect-6f49f58cd5-vsbf4          0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 coredns-7db6d8ff4d-t8qql                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m8s
	  kube-system                 etcd-functional-280000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m22s
	  kube-system                 kube-apiserver-functional-280000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-controller-manager-functional-280000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-proxy-g5l7x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-scheduler-functional-280000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-hk9ct    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-4n5dr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m7s                   kube-proxy       
	  Normal  Starting                 66s                    kube-proxy       
	  Normal  Starting                 107s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  2m26s (x8 over 2m26s)  kubelet          Node functional-280000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m26s (x8 over 2m26s)  kubelet          Node functional-280000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m26s (x7 over 2m26s)  kubelet          Node functional-280000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m22s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m22s                  kubelet          Node functional-280000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m22s                  kubelet          Node functional-280000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m22s                  kubelet          Node functional-280000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m18s                  kubelet          Node functional-280000 status is now: NodeReady
	  Normal  RegisteredNode           2m9s                   node-controller  Node functional-280000 event: Registered Node functional-280000 in Controller
	  Normal  Starting                 112s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  111s (x8 over 111s)    kubelet          Node functional-280000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s (x8 over 111s)    kubelet          Node functional-280000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s (x7 over 111s)    kubelet          Node functional-280000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  111s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           97s                    node-controller  Node functional-280000 event: Registered Node functional-280000 in Controller
	  Normal  NodeHasNoDiskPressure    70s (x8 over 70s)      kubelet          Node functional-280000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  70s (x8 over 70s)      kubelet          Node functional-280000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 70s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     70s (x7 over 70s)      kubelet          Node functional-280000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  70s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           56s                    node-controller  Node functional-280000 event: Registered Node functional-280000 in Controller
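	
	The node summary above is standard kubectl output and can be regenerated against this cluster at any time, assuming kubectl is pointed at the profile's kubeconfig (the KUBECONFIG path logged in the Last Start section):
	
	  kubectl describe node functional-280000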
	
	
	==> dmesg <==
	[  +0.038158] kauditd_printk_skb: 140 callbacks suppressed
	[ +14.786132] kauditd_printk_skb: 96 callbacks suppressed
	[  +2.164159] systemd-fstab-generator[4937]: Ignoring "noauto" option for root device
	[Aug 5 22:57] systemd-fstab-generator[5369]: Ignoring "noauto" option for root device
	[  +0.053476] kauditd_printk_skb: 19 callbacks suppressed
	[  +0.118487] systemd-fstab-generator[5403]: Ignoring "noauto" option for root device
	[  +0.105776] systemd-fstab-generator[5415]: Ignoring "noauto" option for root device
	[  +0.115665] systemd-fstab-generator[5429]: Ignoring "noauto" option for root device
	[  +5.099379] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.334105] systemd-fstab-generator[6061]: Ignoring "noauto" option for root device
	[  +0.086433] systemd-fstab-generator[6073]: Ignoring "noauto" option for root device
	[  +0.089388] systemd-fstab-generator[6085]: Ignoring "noauto" option for root device
	[  +0.099972] systemd-fstab-generator[6100]: Ignoring "noauto" option for root device
	[  +0.224198] systemd-fstab-generator[6273]: Ignoring "noauto" option for root device
	[  +0.907004] systemd-fstab-generator[6397]: Ignoring "noauto" option for root device
	[  +3.404664] kauditd_printk_skb: 199 callbacks suppressed
	[ +11.603521] kauditd_printk_skb: 31 callbacks suppressed
	[  +2.046867] systemd-fstab-generator[7397]: Ignoring "noauto" option for root device
	[  +4.650297] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.230959] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.323056] kauditd_printk_skb: 15 callbacks suppressed
	[Aug 5 22:58] kauditd_printk_skb: 23 callbacks suppressed
	[ +11.735367] kauditd_printk_skb: 29 callbacks suppressed
	[  +9.347944] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.950033] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [3e7c42e3850a] <==
	{"level":"info","ts":"2024-08-05T22:56:42.822394Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-05T22:56:44.061084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-05T22:56:44.061269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-05T22:56:44.061312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-08-05T22:56:44.061345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-08-05T22:56:44.061413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-05T22:56:44.061529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-08-05T22:56:44.061614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-05T22:56:44.066535Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T22:56:44.066548Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-280000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T22:56:44.067257Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T22:56:44.067753Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T22:56:44.067794Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T22:56:44.070624Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T22:56:44.074693Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-05T22:57:09.53611Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-05T22:57:09.536144Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-280000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-08-05T22:57:09.536184Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T22:57:09.536225Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T22:57:09.544561Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T22:57:09.544582Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-05T22:57:09.544602Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-08-05T22:57:09.546016Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-05T22:57:09.546047Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-05T22:57:09.546052Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-280000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [4d5e855c233d] <==
	{"level":"info","ts":"2024-08-05T22:57:24.169863Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T22:57:24.169898Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T22:57:24.171575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-08-05T22:57:24.171618Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-08-05T22:57:24.171677Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T22:57:24.171707Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T22:57:24.173389Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-05T22:57:24.175498Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-05T22:57:24.175592Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-05T22:57:24.176156Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-05T22:57:24.176181Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-05T22:57:25.232832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-05T22:57:25.232981Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-05T22:57:25.233064Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-05T22:57:25.233125Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-08-05T22:57:25.233142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-05T22:57:25.233167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-08-05T22:57:25.233192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-05T22:57:25.237327Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-280000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T22:57:25.237386Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T22:57:25.23761Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T22:57:25.237635Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T22:57:25.237667Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T22:57:25.240758Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T22:57:25.240796Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> kernel <==
	 22:58:33 up 2 min,  0 users,  load average: 0.72, 0.49, 0.20
	Linux functional-280000 5.10.207 #1 SMP PREEMPT Mon Jul 29 12:07:32 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [97209e779f5d] <==
	I0805 22:57:25.873568       1 cache.go:39] Caches are synced for autoregister controller
	I0805 22:57:25.873688       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0805 22:57:25.873692       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0805 22:57:25.873456       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0805 22:57:25.873880       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0805 22:57:25.873480       1 shared_informer.go:320] Caches are synced for configmaps
	I0805 22:57:25.876290       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0805 22:57:25.925579       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 22:57:26.777856       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0805 22:57:26.879802       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0805 22:57:26.880341       1 controller.go:615] quota admission added evaluator for: endpoints
	I0805 22:57:26.881998       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0805 22:57:27.074828       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0805 22:57:27.078576       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0805 22:57:27.089818       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0805 22:57:27.098759       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0805 22:57:27.100684       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0805 22:57:44.981467       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.189.125"}
	I0805 22:57:50.212773       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.36.225"}
	I0805 22:57:59.571322       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0805 22:57:59.616747       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.41.229"}
	I0805 22:58:14.915985       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.98.26"}
	I0805 22:58:32.106558       1 controller.go:615] quota admission added evaluator for: namespaces
	I0805 22:58:32.213021       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.16.113"}
	I0805 22:58:32.224637       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.160.189"}
	
	
	==> kube-controller-manager [3e1436ec78b4] <==
	I0805 22:56:56.749222       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0805 22:56:56.750315       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0805 22:56:56.750328       1 shared_informer.go:320] Caches are synced for PV protection
	I0805 22:56:56.751581       1 shared_informer.go:320] Caches are synced for job
	I0805 22:56:56.752722       1 shared_informer.go:320] Caches are synced for expand
	I0805 22:56:56.752730       1 shared_informer.go:320] Caches are synced for attach detach
	I0805 22:56:56.753805       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0805 22:56:56.758580       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0805 22:56:56.759642       1 shared_informer.go:320] Caches are synced for service account
	I0805 22:56:56.770311       1 shared_informer.go:320] Caches are synced for HPA
	I0805 22:56:56.772467       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0805 22:56:56.772579       1 shared_informer.go:320] Caches are synced for endpoint
	I0805 22:56:56.822817       1 shared_informer.go:320] Caches are synced for cronjob
	I0805 22:56:56.823974       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0805 22:56:56.828365       1 shared_informer.go:320] Caches are synced for deployment
	I0805 22:56:56.831571       1 shared_informer.go:320] Caches are synced for disruption
	I0805 22:56:56.871965       1 shared_informer.go:320] Caches are synced for crt configmap
	I0805 22:56:56.922911       1 shared_informer.go:320] Caches are synced for stateful set
	I0805 22:56:56.935129       1 shared_informer.go:320] Caches are synced for persistent volume
	I0805 22:56:56.951142       1 shared_informer.go:320] Caches are synced for daemon sets
	I0805 22:56:56.967949       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 22:56:56.974770       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 22:56:57.385465       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 22:56:57.428181       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 22:56:57.428227       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [e59974cda540] <==
	I0805 22:58:14.885351       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="10.415417ms"
	I0805 22:58:14.890830       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="5.414933ms"
	I0805 22:58:14.890862       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="11.25µs"
	I0805 22:58:14.893562       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="23.542µs"
	I0805 22:58:15.753542       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="30.666µs"
	I0805 22:58:16.762590       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="24.832µs"
	I0805 22:58:19.788228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="24.958µs"
	I0805 22:58:27.402610       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="26.499µs"
	I0805 22:58:27.842167       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="24µs"
	I0805 22:58:30.398337       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="30.625µs"
	I0805 22:58:32.142465       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="11.741622ms"
	E0805 22:58:32.142485       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0805 22:58:32.142593       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="12.240238ms"
	E0805 22:58:32.142614       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0805 22:58:32.146223       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="3.722349ms"
	E0805 22:58:32.146243       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0805 22:58:32.147100       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="3.498978ms"
	E0805 22:58:32.147113       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0805 22:58:32.158610       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="9.869448ms"
	I0805 22:58:32.158829       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="11.131466ms"
	I0805 22:58:32.203188       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="44.548656ms"
	I0805 22:58:32.203320       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="30.583µs"
	I0805 22:58:32.203371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="8.749µs"
	I0805 22:58:32.206904       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="48.059135ms"
	I0805 22:58:32.207030       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="27.542µs"
	
	
	==> kube-proxy [50ef24dcd52c] <==
	I0805 22:56:45.531456       1 server_linux.go:69] "Using iptables proxy"
	I0805 22:56:45.536368       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0805 22:56:45.547632       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 22:56:45.547652       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 22:56:45.547661       1 server_linux.go:165] "Using iptables Proxier"
	I0805 22:56:45.548289       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 22:56:45.548368       1 server.go:872] "Version info" version="v1.30.3"
	I0805 22:56:45.548373       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 22:56:45.548727       1 config.go:192] "Starting service config controller"
	I0805 22:56:45.548733       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 22:56:45.548742       1 config.go:101] "Starting endpoint slice config controller"
	I0805 22:56:45.548744       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 22:56:45.548927       1 config.go:319] "Starting node config controller"
	I0805 22:56:45.548929       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 22:56:45.648772       1 shared_informer.go:320] Caches are synced for service config
	I0805 22:56:45.648832       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 22:56:45.649006       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [b2a3e3a0dd19] <==
	I0805 22:57:26.911929       1 server_linux.go:69] "Using iptables proxy"
	I0805 22:57:26.916785       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0805 22:57:26.924356       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 22:57:26.924383       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 22:57:26.924389       1 server_linux.go:165] "Using iptables Proxier"
	I0805 22:57:26.925003       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 22:57:26.925068       1 server.go:872] "Version info" version="v1.30.3"
	I0805 22:57:26.925076       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 22:57:26.925564       1 config.go:192] "Starting service config controller"
	I0805 22:57:26.925568       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 22:57:26.925577       1 config.go:101] "Starting endpoint slice config controller"
	I0805 22:57:26.925579       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 22:57:26.925688       1 config.go:319] "Starting node config controller"
	I0805 22:57:26.925690       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 22:57:27.026385       1 shared_informer.go:320] Caches are synced for node config
	I0805 22:57:27.026404       1 shared_informer.go:320] Caches are synced for service config
	I0805 22:57:27.026421       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a60927a63a79] <==
	I0805 22:56:43.166077       1 serving.go:380] Generated self-signed cert in-memory
	W0805 22:56:44.613529       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0805 22:56:44.613646       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 22:56:44.613667       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0805 22:56:44.613685       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0805 22:56:44.640119       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0805 22:56:44.640218       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 22:56:44.640995       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0805 22:56:44.641039       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0805 22:56:44.643503       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0805 22:56:44.641046       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0805 22:56:44.744048       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0805 22:57:09.529243       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0805 22:57:09.529404       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0805 22:57:09.529464       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fe6179c64b91] <==
	I0805 22:57:24.334517       1 serving.go:380] Generated self-signed cert in-memory
	W0805 22:57:25.807890       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0805 22:57:25.807929       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 22:57:25.807939       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0805 22:57:25.807946       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0805 22:57:25.832106       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0805 22:57:25.832298       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 22:57:25.833032       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0805 22:57:25.835502       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0805 22:57:25.835528       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0805 22:57:25.836222       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0805 22:57:25.936813       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 22:58:27 functional-280000 kubelet[6404]: I0805 22:58:27.078607    6404 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/5f8d0f0a-86db-4387-9898-7c981912b13e-test-volume\") on node \"functional-280000\" DevicePath \"\""
	Aug 05 22:58:27 functional-280000 kubelet[6404]: I0805 22:58:27.078621    6404 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zlxfn\" (UniqueName: \"kubernetes.io/projected/5f8d0f0a-86db-4387-9898-7c981912b13e-kube-api-access-zlxfn\") on node \"functional-280000\" DevicePath \"\""
	Aug 05 22:58:27 functional-280000 kubelet[6404]: I0805 22:58:27.394305    6404 scope.go:117] "RemoveContainer" containerID="f9a5ca4dcc6a2ec2222223bfb498adf59c4fab73ec2c044cd20df2307bb92c89"
	Aug 05 22:58:27 functional-280000 kubelet[6404]: I0805 22:58:27.830913    6404 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09572ce2eb8f9bf6dd02e66f0d71a9ead937accca1388dc2de2e1e2ebbd6e7ae"
	Aug 05 22:58:27 functional-280000 kubelet[6404]: I0805 22:58:27.837168    6404 scope.go:117] "RemoveContainer" containerID="f9a5ca4dcc6a2ec2222223bfb498adf59c4fab73ec2c044cd20df2307bb92c89"
	Aug 05 22:58:27 functional-280000 kubelet[6404]: I0805 22:58:27.837327    6404 scope.go:117] "RemoveContainer" containerID="a3c39494be87e4eee47433a0a3ac8955d623986bda8693d672f6882eead3eeab"
	Aug 05 22:58:27 functional-280000 kubelet[6404]: E0805 22:58:27.837406    6404 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-gvtp2_default(fe58daea-77a5-4fb3-a6b2-9c981077fa30)\"" pod="default/hello-node-65f5d5cc78-gvtp2" podUID="fe58daea-77a5-4fb3-a6b2-9c981077fa30"
	Aug 05 22:58:30 functional-280000 kubelet[6404]: I0805 22:58:30.394325    6404 scope.go:117] "RemoveContainer" containerID="4fcf0f05f698fbf8e26923c8a02069686e016d66ca3e7724a0007396a2d9156b"
	Aug 05 22:58:30 functional-280000 kubelet[6404]: E0805 22:58:30.394564    6404 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-vsbf4_default(1268f2c7-a36d-4d30-8ab2-edc37b822001)\"" pod="default/hello-node-connect-6f49f58cd5-vsbf4" podUID="1268f2c7-a36d-4d30-8ab2-edc37b822001"
	Aug 05 22:58:32 functional-280000 kubelet[6404]: I0805 22:58:32.156561    6404 topology_manager.go:215] "Topology Admit Handler" podUID="305445e8-65dc-42e9-a335-fbfba43cfead" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-b5fc48f67-hk9ct"
	Aug 05 22:58:32 functional-280000 kubelet[6404]: E0805 22:58:32.156599    6404 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5f8d0f0a-86db-4387-9898-7c981912b13e" containerName="mount-munger"
	Aug 05 22:58:32 functional-280000 kubelet[6404]: I0805 22:58:32.156617    6404 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f8d0f0a-86db-4387-9898-7c981912b13e" containerName="mount-munger"
	Aug 05 22:58:32 functional-280000 kubelet[6404]: W0805 22:58:32.163040    6404 reflector.go:547] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:functional-280000" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'functional-280000' and this object
	Aug 05 22:58:32 functional-280000 kubelet[6404]: E0805 22:58:32.163060    6404 reflector.go:150] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:functional-280000" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'functional-280000' and this object
	Aug 05 22:58:32 functional-280000 kubelet[6404]: I0805 22:58:32.163073    6404 topology_manager.go:215] "Topology Admit Handler" podUID="86b7fcde-5c17-4eff-940a-771835b66528" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-779776cb65-4n5dr"
	Aug 05 22:58:32 functional-280000 kubelet[6404]: I0805 22:58:32.207974    6404 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxz2g\" (UniqueName: \"kubernetes.io/projected/86b7fcde-5c17-4eff-940a-771835b66528-kube-api-access-wxz2g\") pod \"kubernetes-dashboard-779776cb65-4n5dr\" (UID: \"86b7fcde-5c17-4eff-940a-771835b66528\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-4n5dr"
	Aug 05 22:58:32 functional-280000 kubelet[6404]: I0805 22:58:32.207999    6404 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/305445e8-65dc-42e9-a335-fbfba43cfead-tmp-volume\") pod \"dashboard-metrics-scraper-b5fc48f67-hk9ct\" (UID: \"305445e8-65dc-42e9-a335-fbfba43cfead\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-hk9ct"
	Aug 05 22:58:32 functional-280000 kubelet[6404]: I0805 22:58:32.208047    6404 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7lkg\" (UniqueName: \"kubernetes.io/projected/305445e8-65dc-42e9-a335-fbfba43cfead-kube-api-access-v7lkg\") pod \"dashboard-metrics-scraper-b5fc48f67-hk9ct\" (UID: \"305445e8-65dc-42e9-a335-fbfba43cfead\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-hk9ct"
	Aug 05 22:58:32 functional-280000 kubelet[6404]: I0805 22:58:32.208060    6404 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/86b7fcde-5c17-4eff-940a-771835b66528-tmp-volume\") pod \"kubernetes-dashboard-779776cb65-4n5dr\" (UID: \"86b7fcde-5c17-4eff-940a-771835b66528\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-4n5dr"
	Aug 05 22:58:33 functional-280000 kubelet[6404]: E0805 22:58:33.312324    6404 projected.go:294] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Aug 05 22:58:33 functional-280000 kubelet[6404]: E0805 22:58:33.312349    6404 projected.go:200] Error preparing data for projected volume kube-api-access-v7lkg for pod kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-hk9ct: failed to sync configmap cache: timed out waiting for the condition
	Aug 05 22:58:33 functional-280000 kubelet[6404]: E0805 22:58:33.312400    6404 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/305445e8-65dc-42e9-a335-fbfba43cfead-kube-api-access-v7lkg podName:305445e8-65dc-42e9-a335-fbfba43cfead nodeName:}" failed. No retries permitted until 2024-08-05 22:58:33.81238493 +0000 UTC m=+70.486651753 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v7lkg" (UniqueName: "kubernetes.io/projected/305445e8-65dc-42e9-a335-fbfba43cfead-kube-api-access-v7lkg") pod "dashboard-metrics-scraper-b5fc48f67-hk9ct" (UID: "305445e8-65dc-42e9-a335-fbfba43cfead") : failed to sync configmap cache: timed out waiting for the condition
	Aug 05 22:58:33 functional-280000 kubelet[6404]: E0805 22:58:33.312850    6404 projected.go:294] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Aug 05 22:58:33 functional-280000 kubelet[6404]: E0805 22:58:33.312864    6404 projected.go:200] Error preparing data for projected volume kube-api-access-wxz2g for pod kubernetes-dashboard/kubernetes-dashboard-779776cb65-4n5dr: failed to sync configmap cache: timed out waiting for the condition
	Aug 05 22:58:33 functional-280000 kubelet[6404]: E0805 22:58:33.312879    6404 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/86b7fcde-5c17-4eff-940a-771835b66528-kube-api-access-wxz2g podName:86b7fcde-5c17-4eff-940a-771835b66528 nodeName:}" failed. No retries permitted until 2024-08-05 22:58:33.812874004 +0000 UTC m=+70.487140786 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wxz2g" (UniqueName: "kubernetes.io/projected/86b7fcde-5c17-4eff-940a-771835b66528-kube-api-access-wxz2g") pod "kubernetes-dashboard-779776cb65-4n5dr" (UID: "86b7fcde-5c17-4eff-940a-771835b66528") : failed to sync configmap cache: timed out waiting for the condition
	
	
	==> storage-provisioner [28405778fba7] <==
	I0805 22:56:57.091679       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 22:56:57.098649       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 22:56:57.098675       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0805 22:56:57.101219       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 22:56:57.101315       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-280000_9fc87642-c44f-4219-8379-d4fc97169561!
	I0805 22:56:57.101425       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a190a9d-c596-4294-a79d-ff835f1d93a1", APIVersion:"v1", ResourceVersion:"518", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-280000_9fc87642-c44f-4219-8379-d4fc97169561 became leader
	I0805 22:56:57.201749       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-280000_9fc87642-c44f-4219-8379-d4fc97169561!
	
	
	==> storage-provisioner [61d7fe6df9ff] <==
	I0805 22:57:26.900661       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 22:57:26.908437       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 22:57:26.908456       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0805 22:57:44.295983       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 22:57:44.296053       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-280000_993602a3-412c-4699-9848-0d130be1c219!
	I0805 22:57:44.296389       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a190a9d-c596-4294-a79d-ff835f1d93a1", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-280000_993602a3-412c-4699-9848-0d130be1c219 became leader
	I0805 22:57:44.396543       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-280000_993602a3-412c-4699-9848-0d130be1c219!
	I0805 22:57:55.081685       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0805 22:57:55.081729       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    3832e428-e93d-4eec-ba2a-7b018a276754 367 0 2024-08-05 22:56:25 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-05 22:56:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-ff9c3572-c24e-468e-a4d9-50a1fafbc3a0 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  ff9c3572-c24e-468e-a4d9-50a1fafbc3a0 685 0 2024-08-05 22:57:55 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-08-05 22:57:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-05 22:57:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0805 22:57:55.082137       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"ff9c3572-c24e-468e-a4d9-50a1fafbc3a0", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0805 22:57:55.082275       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-ff9c3572-c24e-468e-a4d9-50a1fafbc3a0" provisioned
	I0805 22:57:55.082589       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0805 22:57:55.082598       1 volume_store.go:212] Trying to save persistentvolume "pvc-ff9c3572-c24e-468e-a4d9-50a1fafbc3a0"
	I0805 22:57:55.088512       1 volume_store.go:219] persistentvolume "pvc-ff9c3572-c24e-468e-a4d9-50a1fafbc3a0" saved
	I0805 22:57:55.088800       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"ff9c3572-c24e-468e-a4d9-50a1fafbc3a0", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-ff9c3572-c24e-468e-a4d9-50a1fafbc3a0
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-280000 -n functional-280000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-280000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-b5fc48f67-hk9ct kubernetes-dashboard-779776cb65-4n5dr
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-280000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-hk9ct kubernetes-dashboard-779776cb65-4n5dr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-280000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-hk9ct kubernetes-dashboard-779776cb65-4n5dr: exit status 1 (41.935833ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-280000/192.168.105.4
	Start Time:       Mon, 05 Aug 2024 15:58:23 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://4326692a7fcaa9e48a6f87c0e874fbdbecfe2c18b2ce6225c0a5a143bfbb67fc
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 05 Aug 2024 15:58:25 -0700
	      Finished:     Mon, 05 Aug 2024 15:58:25 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zlxfn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zlxfn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  9s    default-scheduler  Successfully assigned default/busybox-mount to functional-280000
	  Normal  Pulling    9s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     8s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.101s (1.101s including waiting). Image size: 3547125 bytes.
	  Normal  Created    8s    kubelet            Created container mount-munger
	  Normal  Started    8s    kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-b5fc48f67-hk9ct" not found
	Error from server (NotFound): pods "kubernetes-dashboard-779776cb65-4n5dr" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-280000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-hk9ct kubernetes-dashboard-779776cb65-4n5dr: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (34.34s)
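
The dump above shows two separate problems feeding this failure: the echoserver-arm containers behind hello-node and hello-node-connect were in CrashLoopBackOff, and the dashboard ReplicaSets could not create pods until the kubernetes-dashboard service account existed. A minimal manual check against the same context might look like the following (hypothetical follow-up commands, not part of the recorded test run):

	kubectl --context functional-280000 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard
	kubectl --context functional-280000 -n default logs pod/hello-node-65f5d5cc78-gvtp2 --previous

The first command confirms whether the "serviceaccount not found" sync errors were only a creation-order race; the second retrieves the log of the crashed echoserver-arm container named in the kubelet back-off messages.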

TestMultiControlPlane/serial/StopSecondaryNode (312.31s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 node stop m02 -v=7 --alsologtostderr
E0805 16:03:09.394772    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-949000 node stop m02 -v=7 --alsologtostderr: (12.185760083s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 status -v=7 --alsologtostderr
E0805 16:03:29.876720    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
E0805 16:04:10.838254    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
E0805 16:05:32.759038    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
E0805 16:06:06.647219    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-949000 status -v=7 --alsologtostderr: exit status 7 (3m45.049202792s)

-- stdout --
	ha-949000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-949000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-949000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-949000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0805 16:03:21.273006    2922 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:03:21.273176    2922 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:03:21.273183    2922 out.go:304] Setting ErrFile to fd 2...
	I0805 16:03:21.273186    2922 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:03:21.273301    2922 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:03:21.273406    2922 out.go:298] Setting JSON to false
	I0805 16:03:21.273421    2922 mustload.go:65] Loading cluster: ha-949000
	I0805 16:03:21.273494    2922 notify.go:220] Checking for updates...
	I0805 16:03:21.273636    2922 config.go:182] Loaded profile config "ha-949000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:03:21.273644    2922 status.go:255] checking status of ha-949000 ...
	I0805 16:03:21.274444    2922 status.go:330] ha-949000 host status = "Running" (err=<nil>)
	I0805 16:03:21.274455    2922 host.go:66] Checking if "ha-949000" exists ...
	I0805 16:03:21.274549    2922 host.go:66] Checking if "ha-949000" exists ...
	I0805 16:03:21.274664    2922 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:03:21.274673    2922 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000/id_rsa Username:docker}
	W0805 16:04:36.275916    2922 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0805 16:04:36.276011    2922 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0805 16:04:36.276021    2922 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0805 16:04:36.276025    2922 status.go:257] ha-949000 status: &{Name:ha-949000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0805 16:04:36.276035    2922 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0805 16:04:36.276040    2922 status.go:255] checking status of ha-949000-m02 ...
	I0805 16:04:36.276270    2922 status.go:330] ha-949000-m02 host status = "Stopped" (err=<nil>)
	I0805 16:04:36.276278    2922 status.go:343] host is not running, skipping remaining checks
	I0805 16:04:36.276280    2922 status.go:257] ha-949000-m02 status: &{Name:ha-949000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:04:36.276284    2922 status.go:255] checking status of ha-949000-m03 ...
	I0805 16:04:36.276890    2922 status.go:330] ha-949000-m03 host status = "Running" (err=<nil>)
	I0805 16:04:36.276895    2922 host.go:66] Checking if "ha-949000-m03" exists ...
	I0805 16:04:36.276992    2922 host.go:66] Checking if "ha-949000-m03" exists ...
	I0805 16:04:36.277113    2922 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:04:36.277119    2922 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000-m03/id_rsa Username:docker}
	W0805 16:05:51.278870    2922 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0805 16:05:51.278913    2922 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0805 16:05:51.278920    2922 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0805 16:05:51.278924    2922 status.go:257] ha-949000-m03 status: &{Name:ha-949000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0805 16:05:51.278933    2922 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0805 16:05:51.278936    2922 status.go:255] checking status of ha-949000-m04 ...
	I0805 16:05:51.279589    2922 status.go:330] ha-949000-m04 host status = "Running" (err=<nil>)
	I0805 16:05:51.279597    2922 host.go:66] Checking if "ha-949000-m04" exists ...
	I0805 16:05:51.279712    2922 host.go:66] Checking if "ha-949000-m04" exists ...
	I0805 16:05:51.279836    2922 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:05:51.279846    2922 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000-m04/id_rsa Username:docker}
	W0805 16:07:06.281669    2922 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0805 16:07:06.281866    2922 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0805 16:07:06.281907    2922 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0805 16:07:06.281925    2922 status.go:257] ha-949000-m04 status: &{Name:ha-949000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0805 16:07:06.281969    2922 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-949000 status -v=7 --alsologtostderr": ha-949000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-949000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-949000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-949000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-949000 status -v=7 --alsologtostderr": ha-949000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-949000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-949000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-949000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-949000 status -v=7 --alsologtostderr": ha-949000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-949000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-949000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-949000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-949000 -n ha-949000
E0805 16:07:48.892891    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
E0805 16:08:16.598856    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-949000 -n ha-949000: exit status 3 (1m15.074186917s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0805 16:08:21.357517    2944 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0805 16:08:21.357556    2944 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-949000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (312.31s)
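
Every probe in this failure shows the same symptom: dialing port 22 on the remaining hosts (192.168.105.5, .7, .8) timed out, so the status command spent 3m45s in SSH connect timeouts rather than reporting. A quick reachability sweep would separate a dead guest network from a status-reporting bug; this sketch assumes the BSD netcat shipped with macOS (-G sets the connect timeout in seconds):

	for ip in 192.168.105.5 192.168.105.7 192.168.105.8; do
	  nc -z -G 5 "$ip" 22 && echo "$ip: ssh port open" || echo "$ip: unreachable"
	done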

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.13s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.091386708s)
ha_test.go:413: expected profile "ha-949000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-949000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-949000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-949000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-949000 -n ha-949000
E0805 16:11:06.642355    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-949000 -n ha-949000: exit status 3 (1m15.039123083s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0805 16:12:06.484570    2974 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0805 16:12:06.484627    2974 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-949000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.13s)
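
The assertion here reduces to one field: the Status of the ha-949000 entry in the profile list JSON, which the test expected to be "Degraded" but which came back "Stopped". The same field can be pulled out of the captured JSON by hand; jq on the test host is an assumption:

	out/minikube-darwin-arm64 profile list --output json \
	  | jq -r '.valid[] | select(.Name == "ha-949000") | .Status'

Run against the output recorded above, this prints "Stopped", matching the failure message.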

TestMultiControlPlane/serial/RestartSecondaryNode (305.25s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-949000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.124432875s)

-- stdout --
	* Starting "ha-949000-m02" control-plane node in "ha-949000" cluster
	* Restarting existing qemu2 VM for "ha-949000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-949000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:12:06.548920    2979 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:12:06.549233    2979 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:12:06.549237    2979 out.go:304] Setting ErrFile to fd 2...
	I0805 16:12:06.549241    2979 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:12:06.549410    2979 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:12:06.549704    2979 mustload.go:65] Loading cluster: ha-949000
	I0805 16:12:06.549983    2979 config.go:182] Loaded profile config "ha-949000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0805 16:12:06.550274    2979 host.go:58] "ha-949000-m02" host status: Stopped
	I0805 16:12:06.554979    2979 out.go:177] * Starting "ha-949000-m02" control-plane node in "ha-949000" cluster
	I0805 16:12:06.557993    2979 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:12:06.558009    2979 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 16:12:06.558021    2979 cache.go:56] Caching tarball of preloaded images
	I0805 16:12:06.558114    2979 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:12:06.558120    2979 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:12:06.558196    2979 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/ha-949000/config.json ...
	I0805 16:12:06.558645    2979 start.go:360] acquireMachinesLock for ha-949000-m02: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:12:06.558695    2979 start.go:364] duration metric: took 35.208µs to acquireMachinesLock for "ha-949000-m02"
	I0805 16:12:06.558705    2979 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:12:06.558715    2979 fix.go:54] fixHost starting: m02
	I0805 16:12:06.558871    2979 fix.go:112] recreateIfNeeded on ha-949000-m02: state=Stopped err=<nil>
	W0805 16:12:06.558878    2979 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:12:06.562867    2979 out.go:177] * Restarting existing qemu2 VM for "ha-949000-m02" ...
	I0805 16:12:06.565976    2979 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:12:06.566029    2979 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:f0:a9:e7:48:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000-m02/disk.qcow2
	I0805 16:12:06.568653    2979 main.go:141] libmachine: STDOUT: 
	I0805 16:12:06.568686    2979 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:12:06.568718    2979 fix.go:56] duration metric: took 10.003666ms for fixHost
	I0805 16:12:06.568723    2979 start.go:83] releasing machines lock for "ha-949000-m02", held for 10.023584ms
	W0805 16:12:06.568731    2979 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:12:06.568763    2979 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:12:06.568769    2979 start.go:729] Will try again in 5 seconds ...
	I0805 16:12:11.570998    2979 start.go:360] acquireMachinesLock for ha-949000-m02: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:12:11.571495    2979 start.go:364] duration metric: took 381.958µs to acquireMachinesLock for "ha-949000-m02"
	I0805 16:12:11.571627    2979 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:12:11.571640    2979 fix.go:54] fixHost starting: m02
	I0805 16:12:11.572162    2979 fix.go:112] recreateIfNeeded on ha-949000-m02: state=Stopped err=<nil>
	W0805 16:12:11.572178    2979 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:12:11.576319    2979 out.go:177] * Restarting existing qemu2 VM for "ha-949000-m02" ...
	I0805 16:12:11.580174    2979 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:12:11.580497    2979 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:f0:a9:e7:48:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000-m02/disk.qcow2
	I0805 16:12:11.587068    2979 main.go:141] libmachine: STDOUT: 
	I0805 16:12:11.587118    2979 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:12:11.587219    2979 fix.go:56] duration metric: took 15.5785ms for fixHost
	I0805 16:12:11.587236    2979 start.go:83] releasing machines lock for "ha-949000-m02", held for 15.724875ms
	W0805 16:12:11.587393    2979 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-949000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:12:11.592282    2979 out.go:177] 
	W0805 16:12:11.596379    2979 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:12:11.596397    2979 out.go:239] * 
	W0805 16:12:11.602468    2979 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:12:11.607261    2979 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-949000 node start m02 -v=7 --alsologtostderr": exit status 80
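Both restart attempts above fail at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives a networking file descriptor and the driver gives up. A minimal, hypothetical Go diagnostic (not part of the minikube test suite) that reproduces the failing dial looks like this:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// The qemu2 driver launches QEMU through socket_vmnet_client,
		// which expects a daemon listening on this unix socket; with no
		// daemon running, the dial fails with "connection refused",
		// matching the STDERR lines in the log above.
		const path = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "dial %s: %v\n", path, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}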
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 status -v=7 --alsologtostderr
E0805 16:12:29.708723    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
E0805 16:12:48.887753    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-949000 status -v=7 --alsologtostderr: exit status 7 (3m45.0658125s)

                                                
                                                
-- stdout --
	ha-949000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-949000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-949000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-949000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 16:12:11.667808    2983 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:12:11.667997    2983 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:12:11.668004    2983 out.go:304] Setting ErrFile to fd 2...
	I0805 16:12:11.668007    2983 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:12:11.668173    2983 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:12:11.668318    2983 out.go:298] Setting JSON to false
	I0805 16:12:11.668331    2983 mustload.go:65] Loading cluster: ha-949000
	I0805 16:12:11.668367    2983 notify.go:220] Checking for updates...
	I0805 16:12:11.668602    2983 config.go:182] Loaded profile config "ha-949000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:12:11.668608    2983 status.go:255] checking status of ha-949000 ...
	I0805 16:12:11.669413    2983 status.go:330] ha-949000 host status = "Running" (err=<nil>)
	I0805 16:12:11.669423    2983 host.go:66] Checking if "ha-949000" exists ...
	I0805 16:12:11.669528    2983 host.go:66] Checking if "ha-949000" exists ...
	I0805 16:12:11.669645    2983 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:12:11.669653    2983 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000/id_rsa Username:docker}
	W0805 16:13:26.668582    2983 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0805 16:13:26.668728    2983 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0805 16:13:26.668745    2983 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0805 16:13:26.668753    2983 status.go:257] ha-949000 status: &{Name:ha-949000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0805 16:13:26.668774    2983 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0805 16:13:26.668781    2983 status.go:255] checking status of ha-949000-m02 ...
	I0805 16:13:26.669201    2983 status.go:330] ha-949000-m02 host status = "Stopped" (err=<nil>)
	I0805 16:13:26.669214    2983 status.go:343] host is not running, skipping remaining checks
	I0805 16:13:26.669218    2983 status.go:257] ha-949000-m02 status: &{Name:ha-949000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:13:26.669227    2983 status.go:255] checking status of ha-949000-m03 ...
	I0805 16:13:26.670354    2983 status.go:330] ha-949000-m03 host status = "Running" (err=<nil>)
	I0805 16:13:26.670369    2983 host.go:66] Checking if "ha-949000-m03" exists ...
	I0805 16:13:26.670594    2983 host.go:66] Checking if "ha-949000-m03" exists ...
	I0805 16:13:26.670821    2983 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:13:26.670832    2983 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000-m03/id_rsa Username:docker}
	W0805 16:14:41.670480    2983 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0805 16:14:41.670570    2983 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0805 16:14:41.670584    2983 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0805 16:14:41.670591    2983 status.go:257] ha-949000-m03 status: &{Name:ha-949000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0805 16:14:41.670608    2983 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0805 16:14:41.670615    2983 status.go:255] checking status of ha-949000-m04 ...
	I0805 16:14:41.671779    2983 status.go:330] ha-949000-m04 host status = "Running" (err=<nil>)
	I0805 16:14:41.671792    2983 host.go:66] Checking if "ha-949000-m04" exists ...
	I0805 16:14:41.671979    2983 host.go:66] Checking if "ha-949000-m04" exists ...
	I0805 16:14:41.672174    2983 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:14:41.672184    2983 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000-m04/id_rsa Username:docker}
	W0805 16:15:56.673839    2983 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0805 16:15:56.673932    2983 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0805 16:15:56.673951    2983 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0805 16:15:56.673961    2983 status.go:257] ha-949000-m04 status: &{Name:ha-949000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0805 16:15:56.673982    2983 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-949000 status -v=7 --alsologtostderr" : exit status 7
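Note where the 3m45s runtime of the status command comes from: each unreachable node blocks until the kernel's own TCP timeout expires, which matches the ~75-second gaps between the sshutil.go dial failures above (16:12:11 -> 16:13:26 -> 16:14:41 -> 16:15:56). A short, hypothetical sketch of the same reachability probe with an explicit deadline, using the node IPs from the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Node IPs as reported in the status log above. An explicit
		// DialTimeout keeps each probe to a few seconds instead of
		// waiting out the OS-level TCP connect timeout.
		hosts := []string{"192.168.105.5:22", "192.168.105.7:22", "192.168.105.8:22"}
		for _, h := range hosts {
			conn, err := net.DialTimeout("tcp", h, 3*time.Second)
			if err != nil {
				fmt.Printf("%s: %v\n", h, err)
				continue
			}
			conn.Close()
			fmt.Printf("%s: reachable\n", h)
		}
	}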
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-949000 -n ha-949000
E0805 16:16:06.637274    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-949000 -n ha-949000: exit status 3 (1m15.05808875s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 16:17:11.731566    3006 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0805 16:17:11.731588    3006 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-949000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (305.25s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-949000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-949000 -v=7 --alsologtostderr
E0805 16:21:06.613069    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
E0805 16:22:48.859568    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-949000 -v=7 --alsologtostderr: (5m27.175876916s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-949000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-949000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.228737125s)

                                                
                                                
-- stdout --
	* [ha-949000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-949000" primary control-plane node in "ha-949000" cluster
	* Restarting existing qemu2 VM for "ha-949000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-949000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 16:25:09.094641    3097 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:25:09.094848    3097 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:25:09.094853    3097 out.go:304] Setting ErrFile to fd 2...
	I0805 16:25:09.094856    3097 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:25:09.095032    3097 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:25:09.096374    3097 out.go:298] Setting JSON to false
	I0805 16:25:09.117012    3097 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3280,"bootTime":1722897029,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:25:09.117070    3097 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:25:09.122292    3097 out.go:177] * [ha-949000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:25:09.130238    3097 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:25:09.130299    3097 notify.go:220] Checking for updates...
	I0805 16:25:09.136171    3097 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:25:09.139233    3097 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:25:09.140671    3097 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:25:09.144184    3097 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:25:09.147259    3097 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:25:09.150542    3097 config.go:182] Loaded profile config "ha-949000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:25:09.150602    3097 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:25:09.155158    3097 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 16:25:09.162253    3097 start.go:297] selected driver: qemu2
	I0805 16:25:09.162261    3097 start.go:901] validating driver "qemu2" against &{Name:ha-949000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-949000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:25:09.162352    3097 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:25:09.165177    3097 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:25:09.165217    3097 cni.go:84] Creating CNI manager for ""
	I0805 16:25:09.165224    3097 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0805 16:25:09.165276    3097 start.go:340] cluster config:
	{Name:ha-949000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-949000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:25:09.169465    3097 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:25:09.178219    3097 out.go:177] * Starting "ha-949000" primary control-plane node in "ha-949000" cluster
	I0805 16:25:09.182176    3097 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:25:09.182208    3097 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 16:25:09.182221    3097 cache.go:56] Caching tarball of preloaded images
	I0805 16:25:09.182296    3097 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:25:09.182302    3097 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:25:09.182371    3097 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/ha-949000/config.json ...
	I0805 16:25:09.182817    3097 start.go:360] acquireMachinesLock for ha-949000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:25:09.182852    3097 start.go:364] duration metric: took 29.25µs to acquireMachinesLock for "ha-949000"
	I0805 16:25:09.182862    3097 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:25:09.182869    3097 fix.go:54] fixHost starting: 
	I0805 16:25:09.182989    3097 fix.go:112] recreateIfNeeded on ha-949000: state=Stopped err=<nil>
	W0805 16:25:09.182997    3097 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:25:09.187210    3097 out.go:177] * Restarting existing qemu2 VM for "ha-949000" ...
	I0805 16:25:09.195227    3097 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:25:09.195261    3097 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:1f:29:d1:46:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000/disk.qcow2
	I0805 16:25:09.197364    3097 main.go:141] libmachine: STDOUT: 
	I0805 16:25:09.197381    3097 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:25:09.197408    3097 fix.go:56] duration metric: took 14.540125ms for fixHost
	I0805 16:25:09.197413    3097 start.go:83] releasing machines lock for "ha-949000", held for 14.556584ms
	W0805 16:25:09.197419    3097 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:25:09.197456    3097 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:25:09.197461    3097 start.go:729] Will try again in 5 seconds ...
	I0805 16:25:14.199592    3097 start.go:360] acquireMachinesLock for ha-949000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:25:14.199927    3097 start.go:364] duration metric: took 259.375µs to acquireMachinesLock for "ha-949000"
	I0805 16:25:14.200050    3097 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:25:14.200066    3097 fix.go:54] fixHost starting: 
	I0805 16:25:14.200729    3097 fix.go:112] recreateIfNeeded on ha-949000: state=Stopped err=<nil>
	W0805 16:25:14.200760    3097 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:25:14.205187    3097 out.go:177] * Restarting existing qemu2 VM for "ha-949000" ...
	I0805 16:25:14.213082    3097 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:25:14.213348    3097 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:1f:29:d1:46:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000/disk.qcow2
	I0805 16:25:14.222144    3097 main.go:141] libmachine: STDOUT: 
	I0805 16:25:14.222222    3097 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:25:14.222296    3097 fix.go:56] duration metric: took 22.23ms for fixHost
	I0805 16:25:14.222325    3097 start.go:83] releasing machines lock for "ha-949000", held for 22.375125ms
	W0805 16:25:14.222521    3097 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-949000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:25:14.230090    3097 out.go:177] 
	W0805 16:25:14.234119    3097 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:25:14.234154    3097 out.go:239] * 
	W0805 16:25:14.236737    3097 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:25:14.247154    3097 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-949000 -v=7 --alsologtostderr" : exit status 80
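The shape of this failure matches the node start above: fixHost fails, start.go waits five seconds ("Will try again in 5 seconds ..."), retries once, then exits with GUEST_PROVISION (exit status 80). A reduced, hypothetical sketch of that retry-once flow (startHost here is a stand-in, not minikube's real implementation):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// startHost stands in for the driver start step; in this run it
	// always fails with the error captured in the log above.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80) // the exit status the test observes
			}
		}
	}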
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-949000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-949000 -n ha-949000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-949000 -n ha-949000: exit status 7 (33.50525ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-949000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.57s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-949000 node delete m03 -v=7 --alsologtostderr: exit status 83 (39.737666ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-949000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-949000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 16:25:14.387972    3110 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:25:14.388180    3110 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:25:14.388183    3110 out.go:304] Setting ErrFile to fd 2...
	I0805 16:25:14.388185    3110 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:25:14.388309    3110 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:25:14.388551    3110 mustload.go:65] Loading cluster: ha-949000
	I0805 16:25:14.388767    3110 config.go:182] Loaded profile config "ha-949000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0805 16:25:14.389066    3110 out.go:239] ! The control-plane node ha-949000 host is not running (will try others): state=Stopped
	W0805 16:25:14.389174    3110 out.go:239] ! The control-plane node ha-949000-m02 host is not running (will try others): state=Stopped
	I0805 16:25:14.394061    3110 out.go:177] * The control-plane node ha-949000-m03 host is not running: state=Stopped
	I0805 16:25:14.396888    3110 out.go:177]   To start a cluster, run: "minikube start -p ha-949000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-949000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-949000 status -v=7 --alsologtostderr: exit status 7 (29.178166ms)

                                                
                                                
-- stdout --
	ha-949000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-949000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-949000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-949000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 16:25:14.427997    3112 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:25:14.428161    3112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:25:14.428164    3112 out.go:304] Setting ErrFile to fd 2...
	I0805 16:25:14.428166    3112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:25:14.428277    3112 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:25:14.428394    3112 out.go:298] Setting JSON to false
	I0805 16:25:14.428404    3112 mustload.go:65] Loading cluster: ha-949000
	I0805 16:25:14.428462    3112 notify.go:220] Checking for updates...
	I0805 16:25:14.428641    3112 config.go:182] Loaded profile config "ha-949000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:25:14.428647    3112 status.go:255] checking status of ha-949000 ...
	I0805 16:25:14.428857    3112 status.go:330] ha-949000 host status = "Stopped" (err=<nil>)
	I0805 16:25:14.428860    3112 status.go:343] host is not running, skipping remaining checks
	I0805 16:25:14.428863    3112 status.go:257] ha-949000 status: &{Name:ha-949000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:25:14.428873    3112 status.go:255] checking status of ha-949000-m02 ...
	I0805 16:25:14.428964    3112 status.go:330] ha-949000-m02 host status = "Stopped" (err=<nil>)
	I0805 16:25:14.428967    3112 status.go:343] host is not running, skipping remaining checks
	I0805 16:25:14.428969    3112 status.go:257] ha-949000-m02 status: &{Name:ha-949000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:25:14.428973    3112 status.go:255] checking status of ha-949000-m03 ...
	I0805 16:25:14.429059    3112 status.go:330] ha-949000-m03 host status = "Stopped" (err=<nil>)
	I0805 16:25:14.429062    3112 status.go:343] host is not running, skipping remaining checks
	I0805 16:25:14.429064    3112 status.go:257] ha-949000-m03 status: &{Name:ha-949000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:25:14.429067    3112 status.go:255] checking status of ha-949000-m04 ...
	I0805 16:25:14.429163    3112 status.go:330] ha-949000-m04 host status = "Stopped" (err=<nil>)
	I0805 16:25:14.429165    3112 status.go:343] host is not running, skipping remaining checks
	I0805 16:25:14.429167    3112 status.go:257] ha-949000-m04 status: &{Name:ha-949000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-949000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-949000 -n ha-949000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-949000 -n ha-949000: exit status 7 (29.498833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-949000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-949000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-949000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-949000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-949000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-949000 -n ha-949000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-949000 -n ha-949000: exit status 7 (28.70375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-949000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.07s)
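
Reading note: this 0.07s failure is a cascade rather than a new fault. Earlier failures in this serial group left the ha-949000 VMs stopped, so the degraded-state check fails immediately, and the profile list it compares against, quoted in full above, still describes the intended four-node HA layout (three control-plane nodes plus worker m04). To pull the node topology out of that blob by hand, a jq one-liner is enough; this is a sketch, and the .valid[].Config.Nodes path is inferred from the fragment above rather than taken from a published schema:

	out/minikube-darwin-arm64 profile list --output json \
	  | jq -r '.valid[] | select(.Name=="ha-949000") | .Config.Nodes[] | [.Name, .IP, (.ControlPlane|tostring)] | @tsv'

(The primary node prints with an empty Name column; minikube stores its name as the empty string.)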

                                                
                                    
TestMultiControlPlane/serial/StopCluster (207.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 stop -v=7 --alsologtostderr
E0805 16:26:06.608194    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
E0805 16:27:48.853919    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-949000 stop -v=7 --alsologtostderr: signal: killed (3m27.489070375s)

                                                
                                                
-- stdout --
	* Stopping node "ha-949000-m04"  ...
	* Stopping node "ha-949000-m03"  ...
	* Stopping node "ha-949000-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 16:25:14.561888    3121 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:25:14.562085    3121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:25:14.562089    3121 out.go:304] Setting ErrFile to fd 2...
	I0805 16:25:14.562091    3121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:25:14.562222    3121 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:25:14.562428    3121 out.go:298] Setting JSON to false
	I0805 16:25:14.562523    3121 mustload.go:65] Loading cluster: ha-949000
	I0805 16:25:14.562738    3121 config.go:182] Loaded profile config "ha-949000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:25:14.562788    3121 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/ha-949000/config.json ...
	I0805 16:25:14.563024    3121 mustload.go:65] Loading cluster: ha-949000
	I0805 16:25:14.563104    3121 config.go:182] Loaded profile config "ha-949000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:25:14.563121    3121 stop.go:39] StopHost: ha-949000-m04
	I0805 16:25:14.567036    3121 out.go:177] * Stopping node "ha-949000-m04"  ...
	I0805 16:25:14.574854    3121 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0805 16:25:14.574884    3121 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0805 16:25:14.574891    3121 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000-m04/id_rsa Username:docker}
	W0805 16:26:29.575791    3121 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0805 16:26:29.576056    3121 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0805 16:26:29.576209    3121 main.go:141] libmachine: Stopping "ha-949000-m04"...
	I0805 16:26:29.576371    3121 stop.go:66] stop err: Machine "ha-949000-m04" is already stopped.
	I0805 16:26:29.576397    3121 stop.go:69] host is already stopped
	I0805 16:26:29.576425    3121 stop.go:39] StopHost: ha-949000-m03
	I0805 16:26:29.586702    3121 out.go:177] * Stopping node "ha-949000-m03"  ...
	I0805 16:26:29.590728    3121 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0805 16:26:29.590848    3121 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0805 16:26:29.590900    3121 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000-m03/id_rsa Username:docker}
	W0805 16:27:44.592186    3121 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0805 16:27:44.592405    3121 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0805 16:27:44.592472    3121 main.go:141] libmachine: Stopping "ha-949000-m03"...
	I0805 16:27:44.592623    3121 stop.go:66] stop err: Machine "ha-949000-m03" is already stopped.
	I0805 16:27:44.592651    3121 stop.go:69] host is already stopped
	I0805 16:27:44.592681    3121 stop.go:39] StopHost: ha-949000-m02
	I0805 16:27:44.602122    3121 out.go:177] * Stopping node "ha-949000-m02"  ...
	I0805 16:27:44.606139    3121 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0805 16:27:44.606300    3121 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0805 16:27:44.606332    3121 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/ha-949000-m02/id_rsa Username:docker}

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-darwin-arm64 -p ha-949000 stop -v=7 --alsologtostderr": signal: killed
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-949000 status -v=7 --alsologtostderr: context deadline exceeded (2.333µs)
ha_test.go:540: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-949000 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-949000 -n ha-949000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-949000 -n ha-949000: exit status 7 (69.239167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-949000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (207.56s)
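
Timing note: the stop was killed by the harness (signal: killed after 3m27s), not by a QEMU hang. The stderr timestamps show each node costing roughly 75 seconds: the pre-stop config backup dials SSH first (16:25:14 to 16:26:29 for m04, 16:26:29 to 16:27:44 for m03), the dial times out because the machine is already powered off, and only then does minikube hit the "is already stopped" path and move on. At ~75s per node, m02's dial was still pending when the deadline expired. A short-timeout reachability probe reproduces the slow step in isolation; a sketch, where -G is BSD/macOS netcat's connect-timeout flag and the IPs come from the cluster config above:

	for ip in 192.168.105.6 192.168.105.7 192.168.105.8; do
	  nc -z -G 5 "$ip" 22 && echo "$ip: ssh reachable" || echo "$ip: ssh unreachable"
	done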

                                                
                                    
TestImageBuild/serial/Setup (9.93s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-569000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-569000 --driver=qemu2 : exit status 80 (9.863530542s)

                                                
                                                
-- stdout --
	* [image-569000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-569000" primary control-plane node in "image-569000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-569000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-569000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-569000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-569000 -n image-569000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-569000 -n image-569000: exit status 7 (66.95725ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-569000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.93s)
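
Root-cause note: this failure, like every other ~10-second start failure in this report, dies on the host side before any VM exists. The qemu2 driver launches QEMU through socket_vmnet_client, which must first connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, and that connect is refused on both creation attempts. A minimal host-side health check looks like the following sketch; it assumes socket_vmnet was installed as a root-managed Homebrew service per minikube's qemu2 driver docs, which may not match this CI agent's setup:

	ls -l /var/run/socket_vmnet                   # does the unix socket exist at all?
	sudo launchctl list | grep -i socket_vmnet    # is a daemon registered for it?
	sudo brew services restart socket_vmnet       # restart it, if it is managed by brew

With the daemon healthy, the two "Creating qemu2 VM" attempts above would get past the connect and the test would proceed to boot the guest.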

                                                
                                    
TestJSONOutput/start/Command (10.16s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-500000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-500000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (10.162309208s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"926058c2-1abb-40bb-addd-53128d451cb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-500000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"457933b4-179f-4053-9f2d-29f094d505b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19373"}}
	{"specversion":"1.0","id":"c32a52ee-85dd-436b-9bc5-23b0512e136c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig"}}
	{"specversion":"1.0","id":"0f616b56-d4ef-4127-8a4f-e50afc02d72c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"2a8c9a40-2c01-489f-b986-391ad7786cbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"03cd5304-c36b-4b90-a765-928282785526","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube"}}
	{"specversion":"1.0","id":"7fe37640-c028-4fda-bbc6-eef337c41f44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1d1f3f6f-5bd9-4a1c-a472-7052d33c43d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cbbf1b81-b6b1-4752-a24f-6c1bf806eac3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"ed2ff8e7-f53b-4018-8c22-50d51ada2e0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-500000\" primary control-plane node in \"json-output-500000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0f26c340-4daa-4a81-bf92-ad0dc5b826f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"d88da140-f194-4a85-8212-cb3a15c1a758","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-500000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"f6669cf4-5bd6-41f7-83b2-23b88c524f5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"2467924e-6a97-410f-b642-cd0b22e9dcc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"fb84ebc0-5f49-4a67-9adf-9f5a01317ff0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-500000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"41f951aa-f208-4756-9777-6f684eafe6ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"5d8a33be-c345-43e1-906e-12b3e0fffc25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-500000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (10.16s)
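
Parsing note: the JSON failure is secondary damage. json_output_test.go decodes every stdout line as a CloudEvent, and the raw "OUTPUT:" / "ERROR:" lines from the failed VM creation are interleaved verbatim with the event stream, so decoding stops at the first non-JSON byte (the 'O' in "OUTPUT:"). Filtering to lines that look like JSON objects shows the events themselves are well formed; a sketch using jq, not part of the harness:

	out/minikube-darwin-arm64 start -p json-output-500000 --output=json --user=testUser \
	    --memory=2200 --wait=true --driver=qemu2 2>/dev/null \
	  | grep '^{' | jq -r '[.type, .data.message] | @tsv'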

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-500000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-500000 --output=json --user=testUser: exit status 83 (75.016541ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e29683f6-a7a6-4174-a7ae-4560909869a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-500000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"d5a4607c-b2a9-4dc2-862a-b5ecc4f44d97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-500000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-500000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.04s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-500000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-500000 --output=json --user=testUser: exit status 83 (42.472416ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-500000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-500000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-500000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-500000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)
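
Contrast note: pause above did honor --output=json (its two events decoded cleanly and only the exit code failed the test), while unpause printed the human-readable form despite the same flag, so the CloudEvent decode dies on the very first byte ('*'). A per-line validity check makes the difference visible; a sketch, relying on jq -e to set its exit status from the parse:

	out/minikube-darwin-arm64 unpause -p json-output-500000 --output=json --user=testUser \
	  | while IFS= read -r line; do
	      printf '%s\n' "$line" | jq -e .specversion >/dev/null 2>&1 \
	        || echo "not a CloudEvent: $line"
	    done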

                                                
                                    
TestMinikubeProfile (10.13s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-022000 --driver=qemu2 
E0805 16:29:09.674672    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-022000 --driver=qemu2 : exit status 80 (9.84662525s)

                                                
                                                
-- stdout --
	* [first-022000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-022000" primary control-plane node in "first-022000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-022000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-022000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-022000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-05 16:29:16.916085 -0700 PDT m=+2547.343467834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-029000 -n second-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-029000 -n second-029000: exit status 85 (75.813167ms)

                                                
                                                
-- stdout --
	* Profile "second-029000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-029000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-029000" host is not running, skipping log retrieval (state="* Profile \"second-029000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-029000\"")
helpers_test.go:175: Cleaning up "second-029000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-029000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-05 16:29:17.10104 -0700 PDT m=+2547.528426001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-022000 -n first-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-022000 -n first-022000: exit status 7 (28.359791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-022000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-022000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-022000
--- FAIL: TestMinikubeProfile (10.13s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.22s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-128000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-128000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.154354875s)

                                                
                                                
-- stdout --
	* [mount-start-1-128000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-128000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-128000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-128000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-128000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-128000 -n mount-start-1-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-128000 -n mount-start-1-128000: exit status 7 (65.905ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.22s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-860000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-860000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.89632925s)

                                                
                                                
-- stdout --
	* [multinode-860000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-860000" primary control-plane node in "multinode-860000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-860000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 16:29:27.632953    3356 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:29:27.633316    3356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:29:27.633320    3356 out.go:304] Setting ErrFile to fd 2...
	I0805 16:29:27.633323    3356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:29:27.633528    3356 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:29:27.634846    3356 out.go:298] Setting JSON to false
	I0805 16:29:27.651097    3356 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3538,"bootTime":1722897029,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:29:27.651176    3356 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:29:27.657002    3356 out.go:177] * [multinode-860000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:29:27.664040    3356 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:29:27.664077    3356 notify.go:220] Checking for updates...
	I0805 16:29:27.671025    3356 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:29:27.674021    3356 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:29:27.677002    3356 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:29:27.680015    3356 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:29:27.683072    3356 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:29:27.686133    3356 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:29:27.689980    3356 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 16:29:27.696929    3356 start.go:297] selected driver: qemu2
	I0805 16:29:27.696937    3356 start.go:901] validating driver "qemu2" against <nil>
	I0805 16:29:27.696944    3356 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:29:27.699081    3356 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:29:27.701997    3356 out.go:177] * Automatically selected the socket_vmnet network
	I0805 16:29:27.705054    3356 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:29:27.705084    3356 cni.go:84] Creating CNI manager for ""
	I0805 16:29:27.705088    3356 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0805 16:29:27.705092    3356 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 16:29:27.705130    3356 start.go:340] cluster config:
	{Name:multinode-860000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:29:27.708828    3356 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:29:27.715992    3356 out.go:177] * Starting "multinode-860000" primary control-plane node in "multinode-860000" cluster
	I0805 16:29:27.732112    3356 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:29:27.732132    3356 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 16:29:27.732142    3356 cache.go:56] Caching tarball of preloaded images
	I0805 16:29:27.732238    3356 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:29:27.732244    3356 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:29:27.732471    3356 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/multinode-860000/config.json ...
	I0805 16:29:27.732485    3356 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/multinode-860000/config.json: {Name:mk1da10de4978acc45d1a303bfe6c58e312cb22b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:29:27.732932    3356 start.go:360] acquireMachinesLock for multinode-860000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:29:27.732973    3356 start.go:364] duration metric: took 33.958µs to acquireMachinesLock for "multinode-860000"
	I0805 16:29:27.732989    3356 start.go:93] Provisioning new machine with config: &{Name:multinode-860000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:29:27.733040    3356 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:29:27.741994    3356 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:29:27.760635    3356 start.go:159] libmachine.API.Create for "multinode-860000" (driver="qemu2")
	I0805 16:29:27.760662    3356 client.go:168] LocalClient.Create starting
	I0805 16:29:27.760732    3356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:29:27.760761    3356 main.go:141] libmachine: Decoding PEM data...
	I0805 16:29:27.760771    3356 main.go:141] libmachine: Parsing certificate...
	I0805 16:29:27.760808    3356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:29:27.760830    3356 main.go:141] libmachine: Decoding PEM data...
	I0805 16:29:27.760836    3356 main.go:141] libmachine: Parsing certificate...
	I0805 16:29:27.761290    3356 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:29:27.907261    3356 main.go:141] libmachine: Creating SSH key...
	I0805 16:29:28.035448    3356 main.go:141] libmachine: Creating Disk image...
	I0805 16:29:28.035454    3356 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:29:28.035642    3356 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/disk.qcow2
	I0805 16:29:28.045105    3356 main.go:141] libmachine: STDOUT: 
	I0805 16:29:28.045120    3356 main.go:141] libmachine: STDERR: 
	I0805 16:29:28.045169    3356 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/disk.qcow2 +20000M
	I0805 16:29:28.053099    3356 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:29:28.053115    3356 main.go:141] libmachine: STDERR: 
	I0805 16:29:28.053133    3356 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/disk.qcow2
	I0805 16:29:28.053138    3356 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:29:28.053154    3356 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:29:28.053185    3356 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:ba:0a:19:0e:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/disk.qcow2
	I0805 16:29:28.054864    3356 main.go:141] libmachine: STDOUT: 
	I0805 16:29:28.054879    3356 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:29:28.054897    3356 client.go:171] duration metric: took 294.235667ms to LocalClient.Create
	I0805 16:29:30.057061    3356 start.go:128] duration metric: took 2.324035084s to createHost
	I0805 16:29:30.057135    3356 start.go:83] releasing machines lock for "multinode-860000", held for 2.324197791s
	W0805 16:29:30.057285    3356 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:29:30.070384    3356 out.go:177] * Deleting "multinode-860000" in qemu2 ...
	W0805 16:29:30.099050    3356 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:29:30.099078    3356 start.go:729] Will try again in 5 seconds ...
	I0805 16:29:35.101219    3356 start.go:360] acquireMachinesLock for multinode-860000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:29:35.101613    3356 start.go:364] duration metric: took 300.416µs to acquireMachinesLock for "multinode-860000"
	I0805 16:29:35.101729    3356 start.go:93] Provisioning new machine with config: &{Name:multinode-860000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:29:35.102020    3356 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:29:35.106658    3356 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:29:35.156430    3356 start.go:159] libmachine.API.Create for "multinode-860000" (driver="qemu2")
	I0805 16:29:35.156475    3356 client.go:168] LocalClient.Create starting
	I0805 16:29:35.156588    3356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:29:35.156659    3356 main.go:141] libmachine: Decoding PEM data...
	I0805 16:29:35.156679    3356 main.go:141] libmachine: Parsing certificate...
	I0805 16:29:35.156738    3356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:29:35.156790    3356 main.go:141] libmachine: Decoding PEM data...
	I0805 16:29:35.156803    3356 main.go:141] libmachine: Parsing certificate...
	I0805 16:29:35.157488    3356 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:29:35.312790    3356 main.go:141] libmachine: Creating SSH key...
	I0805 16:29:35.433870    3356 main.go:141] libmachine: Creating Disk image...
	I0805 16:29:35.433876    3356 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:29:35.434086    3356 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/disk.qcow2
	I0805 16:29:35.443533    3356 main.go:141] libmachine: STDOUT: 
	I0805 16:29:35.443549    3356 main.go:141] libmachine: STDERR: 
	I0805 16:29:35.443607    3356 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/disk.qcow2 +20000M
	I0805 16:29:35.451503    3356 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:29:35.451525    3356 main.go:141] libmachine: STDERR: 
	I0805 16:29:35.451535    3356 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/disk.qcow2
	I0805 16:29:35.451541    3356 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:29:35.451549    3356 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:29:35.451582    3356 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:a9:ee:e0:3f:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/disk.qcow2
	I0805 16:29:35.453234    3356 main.go:141] libmachine: STDOUT: 
	I0805 16:29:35.453250    3356 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:29:35.453264    3356 client.go:171] duration metric: took 296.788708ms to LocalClient.Create
	I0805 16:29:37.455491    3356 start.go:128] duration metric: took 2.353488125s to createHost
	I0805 16:29:37.455541    3356 start.go:83] releasing machines lock for "multinode-860000", held for 2.353949875s
	W0805 16:29:37.455884    3356 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-860000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-860000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:29:37.468327    3356 out.go:177] 
	W0805 16:29:37.478569    3356 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:29:37.478605    3356 out.go:239] * 
	* 
	W0805 16:29:37.481402    3356 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:29:37.488435    3356 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-860000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000: exit status 7 (66.294333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.97s)

TestMultiNode/serial/DeployApp2Nodes (94.7s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-860000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-860000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (125.828792ms)

** stderr ** 
	error: cluster "multinode-860000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-860000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-860000 -- rollout status deployment/busybox: exit status 1 (57.368042ms)

** stderr ** 
	error: no server found for cluster "multinode-860000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.440083ms)

** stderr ** 
	error: no server found for cluster "multinode-860000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.34975ms)

** stderr ** 
	error: no server found for cluster "multinode-860000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.398208ms)

** stderr ** 
	error: no server found for cluster "multinode-860000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.896625ms)

** stderr ** 
	error: no server found for cluster "multinode-860000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.572958ms)

** stderr ** 
	error: no server found for cluster "multinode-860000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.54775ms)

** stderr ** 
	error: no server found for cluster "multinode-860000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.288666ms)

** stderr ** 
	error: no server found for cluster "multinode-860000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.441167ms)

** stderr ** 
	error: no server found for cluster "multinode-860000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.883209ms)

** stderr ** 
	error: no server found for cluster "multinode-860000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0805 16:31:06.602088    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.572ms)

** stderr ** 
	error: no server found for cluster "multinode-860000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.800041ms)

** stderr ** 
	error: no server found for cluster "multinode-860000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-860000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-860000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.36ms)

** stderr ** 
	error: no server found for cluster "multinode-860000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-860000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-860000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.557625ms)

** stderr ** 
	error: no server found for cluster "multinode-860000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-860000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-860000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.688292ms)

** stderr ** 
	error: no server found for cluster "multinode-860000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000: exit status 7 (28.593083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (94.70s)

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-860000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.084083ms)

** stderr ** 
	error: no server found for cluster "multinode-860000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000: exit status 7 (28.896208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-860000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-860000 -v 3 --alsologtostderr: exit status 83 (42.714208ms)

-- stdout --
	* The control-plane node multinode-860000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-860000"

-- /stdout --
** stderr ** 
	I0805 16:31:12.386626    3793 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:31:12.386796    3793 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:12.386799    3793 out.go:304] Setting ErrFile to fd 2...
	I0805 16:31:12.386802    3793 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:12.386926    3793 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:31:12.387170    3793 mustload.go:65] Loading cluster: multinode-860000
	I0805 16:31:12.387368    3793 config.go:182] Loaded profile config "multinode-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:31:12.392247    3793 out.go:177] * The control-plane node multinode-860000 host is not running: state=Stopped
	I0805 16:31:12.396281    3793 out.go:177]   To start a cluster, run: "minikube start -p multinode-860000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-860000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000: exit status 7 (29.650042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-860000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-860000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.330542ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-860000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-860000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-860000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000: exit status 7 (28.988ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-860000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-860000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-860000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-860000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000: exit status 7 (28.658958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-860000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-860000 status --output json --alsologtostderr: exit status 7 (29.160875ms)

-- stdout --
	{"Name":"multinode-860000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0805 16:31:12.592751    3805 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:31:12.592921    3805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:12.592928    3805 out.go:304] Setting ErrFile to fd 2...
	I0805 16:31:12.592930    3805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:12.593063    3805 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:31:12.593174    3805 out.go:298] Setting JSON to true
	I0805 16:31:12.593184    3805 mustload.go:65] Loading cluster: multinode-860000
	I0805 16:31:12.593251    3805 notify.go:220] Checking for updates...
	I0805 16:31:12.593371    3805 config.go:182] Loaded profile config "multinode-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:31:12.593377    3805 status.go:255] checking status of multinode-860000 ...
	I0805 16:31:12.593601    3805 status.go:330] multinode-860000 host status = "Stopped" (err=<nil>)
	I0805 16:31:12.593605    3805 status.go:343] host is not running, skipping remaining checks
	I0805 16:31:12.593607    3805 status.go:257] multinode-860000 status: &{Name:multinode-860000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-860000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000: exit status 7 (29.020208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-860000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-860000 node stop m03: exit status 85 (42.023583ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-860000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-860000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-860000 status: exit status 7 (28.979667ms)

-- stdout --
	multinode-860000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-860000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-860000 status --alsologtostderr: exit status 7 (29.572666ms)

-- stdout --
	multinode-860000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 16:31:12.723096    3813 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:31:12.723267    3813 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:12.723270    3813 out.go:304] Setting ErrFile to fd 2...
	I0805 16:31:12.723273    3813 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:12.723421    3813 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:31:12.723544    3813 out.go:298] Setting JSON to false
	I0805 16:31:12.723556    3813 mustload.go:65] Loading cluster: multinode-860000
	I0805 16:31:12.723624    3813 notify.go:220] Checking for updates...
	I0805 16:31:12.723783    3813 config.go:182] Loaded profile config "multinode-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:31:12.723788    3813 status.go:255] checking status of multinode-860000 ...
	I0805 16:31:12.723990    3813 status.go:330] multinode-860000 host status = "Stopped" (err=<nil>)
	I0805 16:31:12.723994    3813 status.go:343] host is not running, skipping remaining checks
	I0805 16:31:12.723996    3813 status.go:257] multinode-860000 status: &{Name:multinode-860000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-860000 status --alsologtostderr": multinode-860000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000: exit status 7 (28.761458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)

TestMultiNode/serial/StartAfterStop (48.22s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-860000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-860000 node start m03 -v=7 --alsologtostderr: exit status 85 (43.079333ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0805 16:31:12.780995    3817 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:31:12.781239    3817 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:12.781242    3817 out.go:304] Setting ErrFile to fd 2...
	I0805 16:31:12.781244    3817 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:12.781397    3817 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:31:12.781639    3817 mustload.go:65] Loading cluster: multinode-860000
	I0805 16:31:12.781836    3817 config.go:182] Loaded profile config "multinode-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:31:12.785301    3817 out.go:177] 
	W0805 16:31:12.788253    3817 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0805 16:31:12.788258    3817 out.go:239] * 
	* 
	W0805 16:31:12.789893    3817 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:31:12.793219    3817 out.go:177] 

** /stderr **
multinode_test.go:284: I0805 16:31:12.780995    3817 out.go:291] Setting OutFile to fd 1 ...
I0805 16:31:12.781239    3817 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 16:31:12.781242    3817 out.go:304] Setting ErrFile to fd 2...
I0805 16:31:12.781244    3817 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 16:31:12.781397    3817 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
I0805 16:31:12.781639    3817 mustload.go:65] Loading cluster: multinode-860000
I0805 16:31:12.781836    3817 config.go:182] Loaded profile config "multinode-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 16:31:12.785301    3817 out.go:177] 
W0805 16:31:12.788253    3817 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0805 16:31:12.788258    3817 out.go:239] * 
* 
W0805 16:31:12.789893    3817 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0805 16:31:12.793219    3817 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-860000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-860000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-860000 status -v=7 --alsologtostderr: exit status 7 (29.399291ms)

-- stdout --
	multinode-860000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 16:31:12.824809    3819 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:31:12.824959    3819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:12.824966    3819 out.go:304] Setting ErrFile to fd 2...
	I0805 16:31:12.824969    3819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:12.825107    3819 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:31:12.825221    3819 out.go:298] Setting JSON to false
	I0805 16:31:12.825233    3819 mustload.go:65] Loading cluster: multinode-860000
	I0805 16:31:12.825291    3819 notify.go:220] Checking for updates...
	I0805 16:31:12.825441    3819 config.go:182] Loaded profile config "multinode-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:31:12.825449    3819 status.go:255] checking status of multinode-860000 ...
	I0805 16:31:12.825660    3819 status.go:330] multinode-860000 host status = "Stopped" (err=<nil>)
	I0805 16:31:12.825664    3819 status.go:343] host is not running, skipping remaining checks
	I0805 16:31:12.825666    3819 status.go:257] multinode-860000 status: &{Name:multinode-860000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-860000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-860000 status -v=7 --alsologtostderr: exit status 7 (75.410291ms)

-- stdout --
	multinode-860000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 16:31:13.869241    3821 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:31:13.869428    3821 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:13.869433    3821 out.go:304] Setting ErrFile to fd 2...
	I0805 16:31:13.869436    3821 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:13.869615    3821 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:31:13.869765    3821 out.go:298] Setting JSON to false
	I0805 16:31:13.869776    3821 mustload.go:65] Loading cluster: multinode-860000
	I0805 16:31:13.869814    3821 notify.go:220] Checking for updates...
	I0805 16:31:13.870032    3821 config.go:182] Loaded profile config "multinode-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:31:13.870039    3821 status.go:255] checking status of multinode-860000 ...
	I0805 16:31:13.870312    3821 status.go:330] multinode-860000 host status = "Stopped" (err=<nil>)
	I0805 16:31:13.870317    3821 status.go:343] host is not running, skipping remaining checks
	I0805 16:31:13.870320    3821 status.go:257] multinode-860000 status: &{Name:multinode-860000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-860000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-860000 status -v=7 --alsologtostderr: exit status 7 (72.361208ms)

-- stdout --
	multinode-860000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 16:31:14.765747    3823 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:31:14.765919    3823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:14.765923    3823 out.go:304] Setting ErrFile to fd 2...
	I0805 16:31:14.765926    3823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:14.766090    3823 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:31:14.766251    3823 out.go:298] Setting JSON to false
	I0805 16:31:14.766268    3823 mustload.go:65] Loading cluster: multinode-860000
	I0805 16:31:14.766302    3823 notify.go:220] Checking for updates...
	I0805 16:31:14.766525    3823 config.go:182] Loaded profile config "multinode-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:31:14.766532    3823 status.go:255] checking status of multinode-860000 ...
	I0805 16:31:14.766817    3823 status.go:330] multinode-860000 host status = "Stopped" (err=<nil>)
	I0805 16:31:14.766821    3823 status.go:343] host is not running, skipping remaining checks
	I0805 16:31:14.766824    3823 status.go:257] multinode-860000 status: &{Name:multinode-860000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-860000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-860000 status -v=7 --alsologtostderr: exit status 7 (71.193417ms)

-- stdout --
	multinode-860000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 16:31:16.559615    3826 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:31:16.559851    3826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:16.559856    3826 out.go:304] Setting ErrFile to fd 2...
	I0805 16:31:16.559860    3826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:16.560044    3826 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:31:16.560198    3826 out.go:298] Setting JSON to false
	I0805 16:31:16.560211    3826 mustload.go:65] Loading cluster: multinode-860000
	I0805 16:31:16.560254    3826 notify.go:220] Checking for updates...
	I0805 16:31:16.560466    3826 config.go:182] Loaded profile config "multinode-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:31:16.560473    3826 status.go:255] checking status of multinode-860000 ...
	I0805 16:31:16.560742    3826 status.go:330] multinode-860000 host status = "Stopped" (err=<nil>)
	I0805 16:31:16.560747    3826 status.go:343] host is not running, skipping remaining checks
	I0805 16:31:16.560750    3826 status.go:257] multinode-860000 status: &{Name:multinode-860000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-860000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-860000 status -v=7 --alsologtostderr: exit status 7 (71.967ms)

-- stdout --
	multinode-860000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 16:31:19.411967    3830 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:31:19.412201    3830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:19.412206    3830 out.go:304] Setting ErrFile to fd 2...
	I0805 16:31:19.412209    3830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:19.412378    3830 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:31:19.412524    3830 out.go:298] Setting JSON to false
	I0805 16:31:19.412537    3830 mustload.go:65] Loading cluster: multinode-860000
	I0805 16:31:19.412577    3830 notify.go:220] Checking for updates...
	I0805 16:31:19.412770    3830 config.go:182] Loaded profile config "multinode-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:31:19.412778    3830 status.go:255] checking status of multinode-860000 ...
	I0805 16:31:19.413105    3830 status.go:330] multinode-860000 host status = "Stopped" (err=<nil>)
	I0805 16:31:19.413110    3830 status.go:343] host is not running, skipping remaining checks
	I0805 16:31:19.413113    3830 status.go:257] multinode-860000 status: &{Name:multinode-860000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-860000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-860000 status -v=7 --alsologtostderr: exit status 7 (72.881791ms)

-- stdout --
	multinode-860000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 16:31:22.869800    3835 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:31:22.870029    3835 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:22.870034    3835 out.go:304] Setting ErrFile to fd 2...
	I0805 16:31:22.870037    3835 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:22.870201    3835 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:31:22.870376    3835 out.go:298] Setting JSON to false
	I0805 16:31:22.870389    3835 mustload.go:65] Loading cluster: multinode-860000
	I0805 16:31:22.870434    3835 notify.go:220] Checking for updates...
	I0805 16:31:22.870678    3835 config.go:182] Loaded profile config "multinode-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:31:22.870686    3835 status.go:255] checking status of multinode-860000 ...
	I0805 16:31:22.871010    3835 status.go:330] multinode-860000 host status = "Stopped" (err=<nil>)
	I0805 16:31:22.871015    3835 status.go:343] host is not running, skipping remaining checks
	I0805 16:31:22.871019    3835 status.go:257] multinode-860000 status: &{Name:multinode-860000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-860000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-860000 status -v=7 --alsologtostderr: exit status 7 (72.543875ms)

-- stdout --
	multinode-860000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 16:31:27.699720    3837 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:31:27.699925    3837 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:27.699931    3837 out.go:304] Setting ErrFile to fd 2...
	I0805 16:31:27.699935    3837 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:27.700115    3837 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:31:27.700286    3837 out.go:298] Setting JSON to false
	I0805 16:31:27.700301    3837 mustload.go:65] Loading cluster: multinode-860000
	I0805 16:31:27.700334    3837 notify.go:220] Checking for updates...
	I0805 16:31:27.700602    3837 config.go:182] Loaded profile config "multinode-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:31:27.700609    3837 status.go:255] checking status of multinode-860000 ...
	I0805 16:31:27.700917    3837 status.go:330] multinode-860000 host status = "Stopped" (err=<nil>)
	I0805 16:31:27.700922    3837 status.go:343] host is not running, skipping remaining checks
	I0805 16:31:27.700926    3837 status.go:257] multinode-860000 status: &{Name:multinode-860000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-860000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-860000 status -v=7 --alsologtostderr: exit status 7 (74.627917ms)

-- stdout --
	multinode-860000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 16:31:42.130214    3845 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:31:42.130463    3845 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:42.130468    3845 out.go:304] Setting ErrFile to fd 2...
	I0805 16:31:42.130471    3845 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:31:42.130662    3845 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:31:42.130827    3845 out.go:298] Setting JSON to false
	I0805 16:31:42.130841    3845 mustload.go:65] Loading cluster: multinode-860000
	I0805 16:31:42.130875    3845 notify.go:220] Checking for updates...
	I0805 16:31:42.131116    3845 config.go:182] Loaded profile config "multinode-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:31:42.131128    3845 status.go:255] checking status of multinode-860000 ...
	I0805 16:31:42.131414    3845 status.go:330] multinode-860000 host status = "Stopped" (err=<nil>)
	I0805 16:31:42.131419    3845 status.go:343] host is not running, skipping remaining checks
	I0805 16:31:42.131422    3845 status.go:257] multinode-860000 status: &{Name:multinode-860000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-860000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-860000 status -v=7 --alsologtostderr: exit status 7 (73.76975ms)

                                                
                                                
-- stdout --
	multinode-860000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 16:32:00.933977    3853 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:32:00.934192    3853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:32:00.934197    3853 out.go:304] Setting ErrFile to fd 2...
	I0805 16:32:00.934200    3853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:32:00.934419    3853 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:32:00.934612    3853 out.go:298] Setting JSON to false
	I0805 16:32:00.934626    3853 mustload.go:65] Loading cluster: multinode-860000
	I0805 16:32:00.934669    3853 notify.go:220] Checking for updates...
	I0805 16:32:00.934883    3853 config.go:182] Loaded profile config "multinode-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:32:00.934890    3853 status.go:255] checking status of multinode-860000 ...
	I0805 16:32:00.935170    3853 status.go:330] multinode-860000 host status = "Stopped" (err=<nil>)
	I0805 16:32:00.935176    3853 status.go:343] host is not running, skipping remaining checks
	I0805 16:32:00.935179    3853 status.go:257] multinode-860000 status: &{Name:multinode-860000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-860000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000: exit status 7 (33.001208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (48.22s)
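The repeated exit status 7 above is consistent throughout: the profile's host, kubelet, and apiserver all report Stopped, so every status probe fails the same way. As a sketch, the harness's post-mortem check can be reproduced by hand with the same binary, flag, and profile name seen in this log (the echo line is illustrative only):

	out/minikube-darwin-arm64 status --format='{{.Host}}' -p multinode-860000
	echo "exit code: $?"   # prints 7 while the host is stopped, matching the output above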

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (7.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-860000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-860000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-860000: (2.129436208s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-860000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-860000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.219838208s)

                                                
                                                
-- stdout --
	* [multinode-860000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-860000" primary control-plane node in "multinode-860000" cluster
	* Restarting existing qemu2 VM for "multinode-860000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-860000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 16:32:03.190838    3871 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:32:03.191023    3871 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:32:03.191028    3871 out.go:304] Setting ErrFile to fd 2...
	I0805 16:32:03.191031    3871 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:32:03.191207    3871 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:32:03.192413    3871 out.go:298] Setting JSON to false
	I0805 16:32:03.211682    3871 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3694,"bootTime":1722897029,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:32:03.211756    3871 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:32:03.217094    3871 out.go:177] * [multinode-860000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:32:03.224064    3871 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:32:03.224097    3871 notify.go:220] Checking for updates...
	I0805 16:32:03.231037    3871 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:32:03.233937    3871 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:32:03.237071    3871 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:32:03.240041    3871 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:32:03.243018    3871 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:32:03.246258    3871 config.go:182] Loaded profile config "multinode-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:32:03.246310    3871 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:32:03.250022    3871 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 16:32:03.256938    3871 start.go:297] selected driver: qemu2
	I0805 16:32:03.256943    3871 start.go:901] validating driver "qemu2" against &{Name:multinode-860000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:32:03.256992    3871 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:32:03.259579    3871 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:32:03.259613    3871 cni.go:84] Creating CNI manager for ""
	I0805 16:32:03.259619    3871 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 16:32:03.259684    3871 start.go:340] cluster config:
	{Name:multinode-860000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:32:03.263464    3871 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:32:03.271040    3871 out.go:177] * Starting "multinode-860000" primary control-plane node in "multinode-860000" cluster
	I0805 16:32:03.274993    3871 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:32:03.275011    3871 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 16:32:03.275017    3871 cache.go:56] Caching tarball of preloaded images
	I0805 16:32:03.275077    3871 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:32:03.275082    3871 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:32:03.275134    3871 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/multinode-860000/config.json ...
	I0805 16:32:03.275578    3871 start.go:360] acquireMachinesLock for multinode-860000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:32:03.275614    3871 start.go:364] duration metric: took 29.583µs to acquireMachinesLock for "multinode-860000"
	I0805 16:32:03.275622    3871 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:32:03.275631    3871 fix.go:54] fixHost starting: 
	I0805 16:32:03.275755    3871 fix.go:112] recreateIfNeeded on multinode-860000: state=Stopped err=<nil>
	W0805 16:32:03.275764    3871 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:32:03.283999    3871 out.go:177] * Restarting existing qemu2 VM for "multinode-860000" ...
	I0805 16:32:03.287961    3871 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:32:03.287998    3871 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:a9:ee:e0:3f:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/disk.qcow2
	I0805 16:32:03.290158    3871 main.go:141] libmachine: STDOUT: 
	I0805 16:32:03.290177    3871 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:32:03.290205    3871 fix.go:56] duration metric: took 14.574958ms for fixHost
	I0805 16:32:03.290210    3871 start.go:83] releasing machines lock for "multinode-860000", held for 14.591958ms
	W0805 16:32:03.290217    3871 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:32:03.290249    3871 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:32:03.290254    3871 start.go:729] Will try again in 5 seconds ...
	I0805 16:32:08.292389    3871 start.go:360] acquireMachinesLock for multinode-860000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:32:08.292872    3871 start.go:364] duration metric: took 322.25µs to acquireMachinesLock for "multinode-860000"
	I0805 16:32:08.293009    3871 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:32:08.293029    3871 fix.go:54] fixHost starting: 
	I0805 16:32:08.293773    3871 fix.go:112] recreateIfNeeded on multinode-860000: state=Stopped err=<nil>
	W0805 16:32:08.293802    3871 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:32:08.298277    3871 out.go:177] * Restarting existing qemu2 VM for "multinode-860000" ...
	I0805 16:32:08.306214    3871 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:32:08.306450    3871 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:a9:ee:e0:3f:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/disk.qcow2
	I0805 16:32:08.315836    3871 main.go:141] libmachine: STDOUT: 
	I0805 16:32:08.315899    3871 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:32:08.315990    3871 fix.go:56] duration metric: took 22.964541ms for fixHost
	I0805 16:32:08.316009    3871 start.go:83] releasing machines lock for "multinode-860000", held for 23.113792ms
	W0805 16:32:08.316251    3871 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-860000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-860000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:32:08.321536    3871 out.go:177] 
	W0805 16:32:08.325256    3871 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:32:08.325286    3871 out.go:239] * 
	* 
	W0805 16:32:08.327875    3871 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:32:08.336243    3871 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 start -p multinode-860000 --wait=true -v=8 --alsologtostderr" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-860000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000: exit status 7 (32.37225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.48s)
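Kubernetes never enters the picture in this failure: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM is never restarted. A hedged diagnostic sketch, reusing the paths from the libmachine command line above (the launchctl label match is an assumption about how socket_vmnet was installed on this agent):

	ls -l /var/run/socket_vmnet                   # does the socket file exist?
	sudo launchctl list | grep -i socket_vmnet    # is a daemon loaded for it?
	# Probe the socket the same way minikube does; socket_vmnet_client
	# takes the socket path followed by the command to exec:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true

If nothing is serving /var/run/socket_vmnet, every qemu2 start in this group fails identically, which matches the remaining failures below.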

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-860000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-860000 node delete m03: exit status 83 (39.05275ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-860000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-860000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-860000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-860000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-860000 status --alsologtostderr: exit status 7 (28.572667ms)

                                                
                                                
-- stdout --
	multinode-860000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 16:32:08.515627    3887 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:32:08.515789    3887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:32:08.515792    3887 out.go:304] Setting ErrFile to fd 2...
	I0805 16:32:08.515795    3887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:32:08.515925    3887 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:32:08.516028    3887 out.go:298] Setting JSON to false
	I0805 16:32:08.516037    3887 mustload.go:65] Loading cluster: multinode-860000
	I0805 16:32:08.516103    3887 notify.go:220] Checking for updates...
	I0805 16:32:08.516225    3887 config.go:182] Loaded profile config "multinode-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:32:08.516231    3887 status.go:255] checking status of multinode-860000 ...
	I0805 16:32:08.516444    3887 status.go:330] multinode-860000 host status = "Stopped" (err=<nil>)
	I0805 16:32:08.516447    3887 status.go:343] host is not running, skipping remaining checks
	I0805 16:32:08.516450    3887 status.go:257] multinode-860000 status: &{Name:multinode-860000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-860000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000: exit status 7 (28.594666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
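Exit status 83 here is minikube declining to operate on a stopped cluster, not a failure inside node delete itself; the command's own hint spells out the prerequisite. A sketch of the implied order, using the profile name from this run (the start will keep failing on this agent until the socket_vmnet issue above is resolved):

	out/minikube-darwin-arm64 start -p multinode-860000
	out/minikube-darwin-arm64 -p multinode-860000 node delete m03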

                                                
                                    
TestMultiNode/serial/StopMultiNode (2.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-860000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-860000 stop: (2.053454375s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-860000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-860000 status: exit status 7 (61.232125ms)

                                                
                                                
-- stdout --
	multinode-860000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-860000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-860000 status --alsologtostderr: exit status 7 (33.312708ms)

                                                
                                                
-- stdout --
	multinode-860000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 16:32:10.692147    3907 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:32:10.692528    3907 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:32:10.692541    3907 out.go:304] Setting ErrFile to fd 2...
	I0805 16:32:10.692544    3907 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:32:10.692743    3907 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:32:10.692906    3907 out.go:298] Setting JSON to false
	I0805 16:32:10.692919    3907 mustload.go:65] Loading cluster: multinode-860000
	I0805 16:32:10.693052    3907 notify.go:220] Checking for updates...
	I0805 16:32:10.693349    3907 config.go:182] Loaded profile config "multinode-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:32:10.693357    3907 status.go:255] checking status of multinode-860000 ...
	I0805 16:32:10.693543    3907 status.go:330] multinode-860000 host status = "Stopped" (err=<nil>)
	I0805 16:32:10.693547    3907 status.go:343] host is not running, skipping remaining checks
	I0805 16:32:10.693549    3907 status.go:257] multinode-860000 status: &{Name:multinode-860000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-860000 status --alsologtostderr": multinode-860000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-860000 status --alsologtostderr": multinode-860000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000: exit status 7 (29.435334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.18s)
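The "incorrect number of stopped hosts/kubelets" assertions fail on count, not content: the restart logs above report "multinode detected (1 nodes found)", so status prints a single node block where the test expects one per node. A quick manual count over the same output (the grep pattern is illustrative):

	out/minikube-darwin-arm64 -p multinode-860000 status | grep -c 'host: Stopped'
	# prints 1 for this profile; a healthy two-node cluster would print 2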

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-860000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-860000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.172399833s)

                                                
                                                
-- stdout --
	* [multinode-860000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-860000" primary control-plane node in "multinode-860000" cluster
	* Restarting existing qemu2 VM for "multinode-860000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-860000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 16:32:10.750820    3911 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:32:10.750939    3911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:32:10.750943    3911 out.go:304] Setting ErrFile to fd 2...
	I0805 16:32:10.750946    3911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:32:10.751072    3911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:32:10.752104    3911 out.go:298] Setting JSON to false
	I0805 16:32:10.768026    3911 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3701,"bootTime":1722897029,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:32:10.768095    3911 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:32:10.772849    3911 out.go:177] * [multinode-860000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:32:10.779841    3911 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:32:10.779897    3911 notify.go:220] Checking for updates...
	I0805 16:32:10.786862    3911 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:32:10.789852    3911 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:32:10.792846    3911 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:32:10.795819    3911 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:32:10.798847    3911 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:32:10.802041    3911 config.go:182] Loaded profile config "multinode-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:32:10.802302    3911 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:32:10.806813    3911 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 16:32:10.812713    3911 start.go:297] selected driver: qemu2
	I0805 16:32:10.812722    3911 start.go:901] validating driver "qemu2" against &{Name:multinode-860000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:32:10.812789    3911 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:32:10.814969    3911 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:32:10.814989    3911 cni.go:84] Creating CNI manager for ""
	I0805 16:32:10.814993    3911 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 16:32:10.815034    3911 start.go:340] cluster config:
	{Name:multinode-860000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:32:10.818487    3911 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:32:10.825902    3911 out.go:177] * Starting "multinode-860000" primary control-plane node in "multinode-860000" cluster
	I0805 16:32:10.829798    3911 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:32:10.829816    3911 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 16:32:10.829825    3911 cache.go:56] Caching tarball of preloaded images
	I0805 16:32:10.829888    3911 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:32:10.829894    3911 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:32:10.829951    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/multinode-860000/config.json ...
	I0805 16:32:10.830399    3911 start.go:360] acquireMachinesLock for multinode-860000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:32:10.830427    3911 start.go:364] duration metric: took 21.75µs to acquireMachinesLock for "multinode-860000"
	I0805 16:32:10.830436    3911 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:32:10.830442    3911 fix.go:54] fixHost starting: 
	I0805 16:32:10.830555    3911 fix.go:112] recreateIfNeeded on multinode-860000: state=Stopped err=<nil>
	W0805 16:32:10.830563    3911 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:32:10.837758    3911 out.go:177] * Restarting existing qemu2 VM for "multinode-860000" ...
	I0805 16:32:10.841845    3911 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:32:10.841888    3911 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:a9:ee:e0:3f:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/disk.qcow2
	I0805 16:32:10.843948    3911 main.go:141] libmachine: STDOUT: 
	I0805 16:32:10.843970    3911 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:32:10.843996    3911 fix.go:56] duration metric: took 13.555083ms for fixHost
	I0805 16:32:10.844000    3911 start.go:83] releasing machines lock for "multinode-860000", held for 13.569125ms
	W0805 16:32:10.844006    3911 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:32:10.844040    3911 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:32:10.844045    3911 start.go:729] Will try again in 5 seconds ...
	I0805 16:32:15.846153    3911 start.go:360] acquireMachinesLock for multinode-860000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:32:15.846502    3911 start.go:364] duration metric: took 223.25µs to acquireMachinesLock for "multinode-860000"
	I0805 16:32:15.846595    3911 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:32:15.846615    3911 fix.go:54] fixHost starting: 
	I0805 16:32:15.847104    3911 fix.go:112] recreateIfNeeded on multinode-860000: state=Stopped err=<nil>
	W0805 16:32:15.847124    3911 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:32:15.855517    3911 out.go:177] * Restarting existing qemu2 VM for "multinode-860000" ...
	I0805 16:32:15.858473    3911 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:32:15.858633    3911 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:a9:ee:e0:3f:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/multinode-860000/disk.qcow2
	I0805 16:32:15.863596    3911 main.go:141] libmachine: STDOUT: 
	I0805 16:32:15.863647    3911 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:32:15.863722    3911 fix.go:56] duration metric: took 17.113583ms for fixHost
	I0805 16:32:15.863737    3911 start.go:83] releasing machines lock for "multinode-860000", held for 17.215625ms
	W0805 16:32:15.863892    3911 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-860000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-860000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:32:15.870446    3911 out.go:177] 
	W0805 16:32:15.874494    3911 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:32:15.874517    3911 out.go:239] * 
	* 
	W0805 16:32:15.875924    3911 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:32:15.884466    3911 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-860000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000: exit status 7 (71.381333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
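Exit status 80 corresponds to the GUEST_PROVISION error class shown in stderr: provisioning the guest VM failed before any Kubernetes component ran. The recovery path is the one minikube itself prints above; note that it is destructive, deleting the profile's VM and state:

	out/minikube-darwin-arm64 delete -p multinode-860000
	out/minikube-darwin-arm64 start -p multinode-860000 --driver=qemu2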

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-860000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-860000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-860000-m01 --driver=qemu2 : exit status 80 (9.836465916s)

                                                
                                                
-- stdout --
	* [multinode-860000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-860000-m01" primary control-plane node in "multinode-860000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-860000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-860000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-860000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-860000-m02 --driver=qemu2 : exit status 80 (9.972802083s)

                                                
                                                
-- stdout --
	* [multinode-860000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-860000-m02" primary control-plane node in "multinode-860000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-860000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-860000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-860000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-860000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-860000: exit status 83 (76.207125ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-860000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-860000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-860000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-860000 -n multinode-860000: exit status 7 (29.965875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.03s)
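Analysis note: every failure in this group reduces to the same host-side precondition, visible in each stderr block above: the socket_vmnet daemon is not listening on /var/run/socket_vmnet, so socket_vmnet_client exits with "Connection refused" before QEMU is ever launched. A minimal Go sketch that reproduces just the failing dial, assuming only the socket path shown in the logs:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// Probe the same unix socket that socket_vmnet_client dials before
	// launching qemu-system-aarch64. A refused connection here matches the
	// `Failed to connect to "/var/run/socket_vmnet": Connection refused`
	// lines throughout this report.
	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, "socket_vmnet not reachable:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, restarting the daemon should clear this whole class of GUEST_PROVISION failures; on a Homebrew install that is typically `sudo brew services start socket_vmnet` (an assumption about this CI host's setup, not taken from the report itself).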

                                                
                                    
TestPreload (10.09s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-448000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-448000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.944342834s)

                                                
                                                
-- stdout --
	* [test-preload-448000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-448000" primary control-plane node in "test-preload-448000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-448000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 16:32:36.135450    3982 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:32:36.135585    3982 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:32:36.135589    3982 out.go:304] Setting ErrFile to fd 2...
	I0805 16:32:36.135592    3982 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:32:36.135718    3982 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:32:36.136815    3982 out.go:298] Setting JSON to false
	I0805 16:32:36.152717    3982 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3727,"bootTime":1722897029,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:32:36.152783    3982 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:32:36.159440    3982 out.go:177] * [test-preload-448000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:32:36.167554    3982 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:32:36.167600    3982 notify.go:220] Checking for updates...
	I0805 16:32:36.175451    3982 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:32:36.178480    3982 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:32:36.181472    3982 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:32:36.184448    3982 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:32:36.187537    3982 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:32:36.190794    3982 config.go:182] Loaded profile config "multinode-860000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:32:36.190859    3982 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:32:36.195466    3982 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 16:32:36.202363    3982 start.go:297] selected driver: qemu2
	I0805 16:32:36.202372    3982 start.go:901] validating driver "qemu2" against <nil>
	I0805 16:32:36.202379    3982 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:32:36.204741    3982 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:32:36.207456    3982 out.go:177] * Automatically selected the socket_vmnet network
	I0805 16:32:36.210508    3982 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:32:36.210523    3982 cni.go:84] Creating CNI manager for ""
	I0805 16:32:36.210530    3982 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:32:36.210535    3982 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 16:32:36.210582    3982 start.go:340] cluster config:
	{Name:test-preload-448000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-448000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:32:36.214326    3982 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:32:36.222446    3982 out.go:177] * Starting "test-preload-448000" primary control-plane node in "test-preload-448000" cluster
	I0805 16:32:36.226456    3982 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0805 16:32:36.226544    3982 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/test-preload-448000/config.json ...
	I0805 16:32:36.226563    3982 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/test-preload-448000/config.json: {Name:mk65d421a58178adf07dc6bc97cc7aa372ed2e9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:32:36.226571    3982 cache.go:107] acquiring lock: {Name:mkdb304c7bbd79570fe8e51264f4688630824a9e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:32:36.226570    3982 cache.go:107] acquiring lock: {Name:mk0c34c28047fa734166bc409e2020c48e54df4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:32:36.226590    3982 cache.go:107] acquiring lock: {Name:mkf5a0a9d3b330464d842e912bade7e43555585b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:32:36.226833    3982 cache.go:107] acquiring lock: {Name:mk2537d290e771871d32e35fa68185179945e728 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:32:36.226876    3982 cache.go:107] acquiring lock: {Name:mkb77bd5ff7c209778d67ee6e1892517c171190e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:32:36.226877    3982 cache.go:107] acquiring lock: {Name:mkb7e5971cb7ea4741c7162e6a6a7434194f0d81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:32:36.226890    3982 cache.go:107] acquiring lock: {Name:mk7acee541b692625dbdc1292b88aa648d42145e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:32:36.226908    3982 cache.go:107] acquiring lock: {Name:mkce73e997715c070724d7de491e8944c46e0eb3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:32:36.226966    3982 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0805 16:32:36.226965    3982 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0805 16:32:36.226984    3982 start.go:360] acquireMachinesLock for test-preload-448000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:32:36.227006    3982 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0805 16:32:36.227040    3982 start.go:364] duration metric: took 47.25µs to acquireMachinesLock for "test-preload-448000"
	I0805 16:32:36.227053    3982 start.go:93] Provisioning new machine with config: &{Name:test-preload-448000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-448000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:32:36.227088    3982 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:32:36.227118    3982 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0805 16:32:36.227218    3982 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0805 16:32:36.227621    3982 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:32:36.231840    3982 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0805 16:32:36.231844    3982 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 16:32:36.235401    3982 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:32:36.239776    3982 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0805 16:32:36.239870    3982 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0805 16:32:36.240069    3982 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0805 16:32:36.240431    3982 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0805 16:32:36.241846    3982 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:32:36.241845    3982 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0805 16:32:36.241874    3982 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0805 16:32:36.242217    3982 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 16:32:36.255103    3982 start.go:159] libmachine.API.Create for "test-preload-448000" (driver="qemu2")
	I0805 16:32:36.255124    3982 client.go:168] LocalClient.Create starting
	I0805 16:32:36.255208    3982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:32:36.255240    3982 main.go:141] libmachine: Decoding PEM data...
	I0805 16:32:36.255249    3982 main.go:141] libmachine: Parsing certificate...
	I0805 16:32:36.255288    3982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:32:36.255312    3982 main.go:141] libmachine: Decoding PEM data...
	I0805 16:32:36.255320    3982 main.go:141] libmachine: Parsing certificate...
	I0805 16:32:36.255693    3982 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:32:36.410693    3982 main.go:141] libmachine: Creating SSH key...
	I0805 16:32:36.600044    3982 main.go:141] libmachine: Creating Disk image...
	I0805 16:32:36.600069    3982 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:32:36.600293    3982 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/test-preload-448000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/test-preload-448000/disk.qcow2
	I0805 16:32:36.610492    3982 main.go:141] libmachine: STDOUT: 
	I0805 16:32:36.610508    3982 main.go:141] libmachine: STDERR: 
	I0805 16:32:36.610561    3982 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/test-preload-448000/disk.qcow2 +20000M
	I0805 16:32:36.619241    3982 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:32:36.619254    3982 main.go:141] libmachine: STDERR: 
	I0805 16:32:36.619264    3982 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/test-preload-448000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/test-preload-448000/disk.qcow2
	I0805 16:32:36.619268    3982 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:32:36.619277    3982 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:32:36.619304    3982 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/test-preload-448000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/test-preload-448000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/test-preload-448000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:bb:8e:16:f5:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/test-preload-448000/disk.qcow2
	I0805 16:32:36.621055    3982 main.go:141] libmachine: STDOUT: 
	I0805 16:32:36.621077    3982 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:32:36.621092    3982 client.go:171] duration metric: took 365.972541ms to LocalClient.Create
	I0805 16:32:36.662846    3982 cache.go:162] opening:  /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0805 16:32:36.683358    3982 cache.go:162] opening:  /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0805 16:32:36.699490    3982 cache.go:162] opening:  /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0805 16:32:36.738986    3982 cache.go:162] opening:  /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0805 16:32:36.751968    3982 cache.go:162] opening:  /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0805 16:32:36.807667    3982 cache.go:162] opening:  /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0805 16:32:36.820916    3982 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0805 16:32:36.820946    3982 cache.go:162] opening:  /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0805 16:32:36.874907    3982 cache.go:157] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0805 16:32:36.874942    3982 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 648.042125ms
	I0805 16:32:36.874973    3982 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0805 16:32:37.142428    3982 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0805 16:32:37.142523    3982 cache.go:162] opening:  /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0805 16:32:37.416901    3982 cache.go:157] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0805 16:32:37.416963    3982 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.190404167s
	I0805 16:32:37.416988    3982 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0805 16:32:38.587900    3982 cache.go:157] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0805 16:32:38.587969    3982 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.361187833s
	I0805 16:32:38.587999    3982 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0805 16:32:38.621281    3982 start.go:128] duration metric: took 2.394224625s to createHost
	I0805 16:32:38.621334    3982 start.go:83] releasing machines lock for "test-preload-448000", held for 2.394333542s
	W0805 16:32:38.621403    3982 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:32:38.632904    3982 out.go:177] * Deleting "test-preload-448000" in qemu2 ...
	W0805 16:32:38.662054    3982 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:32:38.662084    3982 start.go:729] Will try again in 5 seconds ...
	I0805 16:32:39.178154    3982 cache.go:157] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0805 16:32:39.178213    3982 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.951453875s
	I0805 16:32:39.178241    3982 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0805 16:32:40.678888    3982 cache.go:157] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0805 16:32:40.678933    3982 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.452453875s
	I0805 16:32:40.678954    3982 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0805 16:32:41.292739    3982 cache.go:157] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0805 16:32:41.292817    3982 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.06606275s
	I0805 16:32:41.292849    3982 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0805 16:32:42.076908    3982 cache.go:157] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0805 16:32:42.076988    3982 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.850517666s
	I0805 16:32:42.077019    3982 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0805 16:32:43.662133    3982 start.go:360] acquireMachinesLock for test-preload-448000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:32:43.662466    3982 start.go:364] duration metric: took 254.625µs to acquireMachinesLock for "test-preload-448000"
	I0805 16:32:43.662552    3982 start.go:93] Provisioning new machine with config: &{Name:test-preload-448000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-448000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:32:43.662723    3982 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:32:43.672250    3982 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:32:43.717877    3982 start.go:159] libmachine.API.Create for "test-preload-448000" (driver="qemu2")
	I0805 16:32:43.717975    3982 client.go:168] LocalClient.Create starting
	I0805 16:32:43.718100    3982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:32:43.718163    3982 main.go:141] libmachine: Decoding PEM data...
	I0805 16:32:43.718183    3982 main.go:141] libmachine: Parsing certificate...
	I0805 16:32:43.718262    3982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:32:43.718319    3982 main.go:141] libmachine: Decoding PEM data...
	I0805 16:32:43.718334    3982 main.go:141] libmachine: Parsing certificate...
	I0805 16:32:43.718893    3982 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:32:43.878654    3982 main.go:141] libmachine: Creating SSH key...
	I0805 16:32:43.982722    3982 main.go:141] libmachine: Creating Disk image...
	I0805 16:32:43.982727    3982 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:32:43.982918    3982 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/test-preload-448000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/test-preload-448000/disk.qcow2
	I0805 16:32:43.992472    3982 main.go:141] libmachine: STDOUT: 
	I0805 16:32:43.992499    3982 main.go:141] libmachine: STDERR: 
	I0805 16:32:43.992564    3982 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/test-preload-448000/disk.qcow2 +20000M
	I0805 16:32:44.000721    3982 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:32:44.000736    3982 main.go:141] libmachine: STDERR: 
	I0805 16:32:44.000748    3982 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/test-preload-448000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/test-preload-448000/disk.qcow2
	I0805 16:32:44.000754    3982 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:32:44.000774    3982 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:32:44.000803    3982 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/test-preload-448000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/test-preload-448000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/test-preload-448000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:36:ec:bb:53:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/test-preload-448000/disk.qcow2
	I0805 16:32:44.002562    3982 main.go:141] libmachine: STDOUT: 
	I0805 16:32:44.002580    3982 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:32:44.002593    3982 client.go:171] duration metric: took 284.616959ms to LocalClient.Create
	I0805 16:32:45.225260    3982 cache.go:157] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0805 16:32:45.225332    3982 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.998718s
	I0805 16:32:45.225358    3982 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0805 16:32:45.225405    3982 cache.go:87] Successfully saved all images to host disk.
	I0805 16:32:46.004820    3982 start.go:128] duration metric: took 2.342099583s to createHost
	I0805 16:32:46.004900    3982 start.go:83] releasing machines lock for "test-preload-448000", held for 2.342460125s
	W0805 16:32:46.005275    3982 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-448000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-448000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:32:46.020868    3982 out.go:177] 
	W0805 16:32:46.023826    3982 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:32:46.023851    3982 out.go:239] * 
	* 
	W0805 16:32:46.026499    3982 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:32:46.037833    3982 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-448000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-05 16:32:46.056132 -0700 PDT m=+2756.487729293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-448000 -n test-preload-448000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-448000 -n test-preload-448000: exit status 7 (64.491583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-448000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-448000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-448000
--- FAIL: TestPreload (10.09s)
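Analysis note: although the VM never came up, the image-cache half of TestPreload did complete. The cache.go lines above record every image saved as a tar under the arch-specific cache, including the two images (coredns, storage-provisioner) that needed an amd64-to-arm64 re-pull. A small sketch to verify those artifacts independently of minikube; the fallback to $HOME/.minikube when MINIKUBE_HOME is unset is an assumption (on this host MINIKUBE_HOME points directly at the .minikube directory, as the stdout shows):

	package main

	import (
		"fmt"
		"io/fs"
		"os"
		"path/filepath"
	)

	// Walk the arch-specific image cache that the cache.go log lines above
	// report writing to, and list the saved tar files with their sizes.
	func main() {
		root := os.Getenv("MINIKUBE_HOME") // already ends in .minikube on this host
		if root == "" {
			home, _ := os.UserHomeDir()
			root = filepath.Join(home, ".minikube") // assumed default location
		}
		cache := filepath.Join(root, "cache", "images", "arm64")
		err := filepath.WalkDir(cache, func(p string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			info, ierr := d.Info()
			if ierr != nil {
				return ierr
			}
			fmt.Printf("%10d  %s\n", info.Size(), p)
			return nil
		})
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}

This also explains the cache.go:157 "exists" lines in the log: once the tars are on disk, subsequent runs hit the cache instead of re-pulling from the registries.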

                                                
                                    
TestScheduledStopUnix (9.88s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-710000 --memory=2048 --driver=qemu2 
E0805 16:32:48.847848    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-710000 --memory=2048 --driver=qemu2 : exit status 80 (9.734915833s)

                                                
                                                
-- stdout --
	* [scheduled-stop-710000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-710000" primary control-plane node in "scheduled-stop-710000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-710000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-710000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-710000" primary control-plane node in "scheduled-stop-710000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-710000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-05 16:32:55.938117 -0700 PDT m=+2766.369913209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-710000 -n scheduled-stop-710000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-710000 -n scheduled-stop-710000: exit status 7 (65.5485ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-710000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-710000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-710000
--- FAIL: TestScheduledStopUnix (9.88s)
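Analysis note: the stray E0805 cert_rotation line near the top of this test is unrelated to the scheduled-stop failure. A client-go cert-rotation watcher inside the long-running test process still references the client.crt of the earlier functional-280000 profile, which the suite deleted, hence "no such file or directory". A sketch for spotting such stale references in a kubeconfig, assuming the standard client-go loader (the program below is illustrative and not part of the test suite; it needs k8s.io/client-go in go.mod):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"

		"k8s.io/client-go/tools/clientcmd"
	)

	// List kubeconfig users whose client certificate file no longer exists,
	// which is the situation behind the cert_rotation error in the log above.
	func main() {
		path := os.Getenv("KUBECONFIG")
		if path == "" {
			home, _ := os.UserHomeDir()
			path = filepath.Join(home, ".kube", "config")
		}
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
			os.Exit(1)
		}
		for name, auth := range cfg.AuthInfos {
			if auth.ClientCertificate == "" {
				continue
			}
			if _, err := os.Stat(auth.ClientCertificate); os.IsNotExist(err) {
				fmt.Printf("user %q references missing cert: %s\n", name, auth.ClientCertificate)
			}
		}
	}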

                                                
                                    
TestSkaffold (12.37s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1928388772 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1928388772 version: (1.07077875s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-518000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-518000 --memory=2600 --driver=qemu2 : exit status 80 (9.897781292s)

                                                
                                                
-- stdout --
	* [skaffold-518000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-518000" primary control-plane node in "skaffold-518000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-518000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-518000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-518000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-518000" primary control-plane node in "skaffold-518000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-518000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-518000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-05 16:33:08.301647 -0700 PDT m=+2778.733692876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-518000 -n skaffold-518000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-518000 -n skaffold-518000: exit status 7 (61.141459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-518000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-518000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-518000
--- FAIL: TestSkaffold (12.37s)
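Analysis note: one more detail from the qemu invocations captured earlier (see the TestPreload stderr): QEMU is never pointed at the socket path directly. socket_vmnet_client dials /var/run/socket_vmnet and launches qemu-system-aarch64 with `-netdev socket,id=net0,fd=3`, i.e. a descriptor the child inherits, which is why the failure surfaces in the wrapper rather than in QEMU. The sketch below shows only that fd-inheritance mechanic in Go (exec.Cmd.ExtraFiles maps entry 0 to fd 3 in the child); the wrapper's actual handshake with the daemon may differ and is not taken from this report:

	package main

	import (
		"net"
		"os"
		"os/exec"
	)

	func main() {
		// The dial that fails with "Connection refused" throughout this report.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			panic(err) // matches the wrapper's failure mode when the daemon is down
		}
		file, err := conn.(*net.UnixConn).File()
		if err != nil {
			panic(err)
		}
		// ExtraFiles[0] becomes fd 3 in the child, matching the
		// "-netdev socket,id=net0,fd=3" seen on the qemu command lines.
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{file}
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}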

                                                
                                    
TestRunningBinaryUpgrade (599.47s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3106976708 start -p running-upgrade-230000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3106976708 start -p running-upgrade-230000 --memory=2200 --vm-driver=qemu2 : (52.409279458s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-230000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0805 16:35:51.915193    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
E0805 16:36:06.596101    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-230000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m32.878083166s)

                                                
                                                
-- stdout --
	* [running-upgrade-230000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-230000" primary control-plane node in "running-upgrade-230000" cluster
	* Updating the running qemu2 "running-upgrade-230000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 16:34:43.858420    4412 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:34:43.858553    4412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:34:43.858558    4412 out.go:304] Setting ErrFile to fd 2...
	I0805 16:34:43.858562    4412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:34:43.858671    4412 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:34:43.859773    4412 out.go:298] Setting JSON to false
	I0805 16:34:43.876419    4412 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3854,"bootTime":1722897029,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:34:43.876497    4412 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:34:43.880837    4412 out.go:177] * [running-upgrade-230000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:34:43.887822    4412 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:34:43.887879    4412 notify.go:220] Checking for updates...
	I0805 16:34:43.893692    4412 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:34:43.896696    4412 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:34:43.898095    4412 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:34:43.900701    4412 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:34:43.903763    4412 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:34:43.907056    4412 config.go:182] Loaded profile config "running-upgrade-230000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 16:34:43.910684    4412 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0805 16:34:43.913772    4412 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:34:43.917557    4412 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 16:34:43.924690    4412 start.go:297] selected driver: qemu2
	I0805 16:34:43.924696    4412 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-230000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50282 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-230000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 16:34:43.924744    4412 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:34:43.927006    4412 cni.go:84] Creating CNI manager for ""
	I0805 16:34:43.927022    4412 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:34:43.927043    4412 start.go:340] cluster config:
	{Name:running-upgrade-230000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50282 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-230000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 16:34:43.927094    4412 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:34:43.934682    4412 out.go:177] * Starting "running-upgrade-230000" primary control-plane node in "running-upgrade-230000" cluster
	I0805 16:34:43.938652    4412 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0805 16:34:43.938665    4412 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0805 16:34:43.938673    4412 cache.go:56] Caching tarball of preloaded images
	I0805 16:34:43.938724    4412 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:34:43.938729    4412 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0805 16:34:43.938782    4412 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000/config.json ...
	I0805 16:34:43.939129    4412 start.go:360] acquireMachinesLock for running-upgrade-230000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:34:43.939161    4412 start.go:364] duration metric: took 26.166µs to acquireMachinesLock for "running-upgrade-230000"
	I0805 16:34:43.939168    4412 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:34:43.939175    4412 fix.go:54] fixHost starting: 
	I0805 16:34:43.939841    4412 fix.go:112] recreateIfNeeded on running-upgrade-230000: state=Running err=<nil>
	W0805 16:34:43.939857    4412 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:34:43.943663    4412 out.go:177] * Updating the running qemu2 "running-upgrade-230000" VM ...
	I0805 16:34:43.951671    4412 machine.go:94] provisionDockerMachine start ...
	I0805 16:34:43.951707    4412 main.go:141] libmachine: Using SSH client type: native
	I0805 16:34:43.951810    4412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10044ea10] 0x100451270 <nil>  [] 0s} localhost 50250 <nil> <nil>}
	I0805 16:34:43.951814    4412 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 16:34:44.025253    4412 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-230000
	
	I0805 16:34:44.025268    4412 buildroot.go:166] provisioning hostname "running-upgrade-230000"
	I0805 16:34:44.025310    4412 main.go:141] libmachine: Using SSH client type: native
	I0805 16:34:44.025416    4412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10044ea10] 0x100451270 <nil>  [] 0s} localhost 50250 <nil> <nil>}
	I0805 16:34:44.025422    4412 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-230000 && echo "running-upgrade-230000" | sudo tee /etc/hostname
	I0805 16:34:44.102184    4412 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-230000
	
	I0805 16:34:44.102234    4412 main.go:141] libmachine: Using SSH client type: native
	I0805 16:34:44.102349    4412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10044ea10] 0x100451270 <nil>  [] 0s} localhost 50250 <nil> <nil>}
	I0805 16:34:44.102357    4412 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-230000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-230000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-230000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:34:44.172761    4412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:34:44.172772    4412 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1054/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1054/.minikube}
	I0805 16:34:44.172783    4412 buildroot.go:174] setting up certificates
	I0805 16:34:44.172787    4412 provision.go:84] configureAuth start
	I0805 16:34:44.172793    4412 provision.go:143] copyHostCerts
	I0805 16:34:44.172860    4412 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1054/.minikube/cert.pem, removing ...
	I0805 16:34:44.172865    4412 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1054/.minikube/cert.pem
	I0805 16:34:44.173002    4412 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1054/.minikube/cert.pem (1123 bytes)
	I0805 16:34:44.173206    4412 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1054/.minikube/key.pem, removing ...
	I0805 16:34:44.173209    4412 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1054/.minikube/key.pem
	I0805 16:34:44.173265    4412 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1054/.minikube/key.pem (1675 bytes)
	I0805 16:34:44.173386    4412 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1054/.minikube/ca.pem, removing ...
	I0805 16:34:44.173389    4412 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1054/.minikube/ca.pem
	I0805 16:34:44.173439    4412 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1054/.minikube/ca.pem (1078 bytes)
	I0805 16:34:44.173546    4412 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-230000 san=[127.0.0.1 localhost minikube running-upgrade-230000]
	I0805 16:34:44.435682    4412 provision.go:177] copyRemoteCerts
	I0805 16:34:44.435737    4412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:34:44.435748    4412 sshutil.go:53] new ssh client: &{IP:localhost Port:50250 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/running-upgrade-230000/id_rsa Username:docker}
	I0805 16:34:44.473521    4412 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0805 16:34:44.480564    4412 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0805 16:34:44.488479    4412 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 16:34:44.499221    4412 provision.go:87] duration metric: took 326.4345ms to configureAuth
	I0805 16:34:44.499236    4412 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:34:44.499366    4412 config.go:182] Loaded profile config "running-upgrade-230000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 16:34:44.499401    4412 main.go:141] libmachine: Using SSH client type: native
	I0805 16:34:44.499498    4412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10044ea10] 0x100451270 <nil>  [] 0s} localhost 50250 <nil> <nil>}
	I0805 16:34:44.499502    4412 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:34:44.571612    4412 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:34:44.571621    4412 buildroot.go:70] root file system type: tmpfs
	I0805 16:34:44.571673    4412 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:34:44.571718    4412 main.go:141] libmachine: Using SSH client type: native
	I0805 16:34:44.571824    4412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10044ea10] 0x100451270 <nil>  [] 0s} localhost 50250 <nil> <nil>}
	I0805 16:34:44.571861    4412 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:34:44.648516    4412 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:34:44.648562    4412 main.go:141] libmachine: Using SSH client type: native
	I0805 16:34:44.648677    4412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10044ea10] 0x100451270 <nil>  [] 0s} localhost 50250 <nil> <nil>}
	I0805 16:34:44.648685    4412 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:34:44.721742    4412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:34:44.721751    4412 machine.go:97] duration metric: took 770.091083ms to provisionDockerMachine
	I0805 16:34:44.721756    4412 start.go:293] postStartSetup for "running-upgrade-230000" (driver="qemu2")
	I0805 16:34:44.721762    4412 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:34:44.721810    4412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:34:44.721819    4412 sshutil.go:53] new ssh client: &{IP:localhost Port:50250 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/running-upgrade-230000/id_rsa Username:docker}
	I0805 16:34:44.760436    4412 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:34:44.761671    4412 info.go:137] Remote host: Buildroot 2021.02.12
	I0805 16:34:44.761677    4412 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1054/.minikube/addons for local assets ...
	I0805 16:34:44.761744    4412 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1054/.minikube/files for local assets ...
	I0805 16:34:44.761856    4412 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1054/.minikube/files/etc/ssl/certs/15512.pem -> 15512.pem in /etc/ssl/certs
	I0805 16:34:44.761993    4412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:34:44.764866    4412 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/files/etc/ssl/certs/15512.pem --> /etc/ssl/certs/15512.pem (1708 bytes)
	I0805 16:34:44.771934    4412 start.go:296] duration metric: took 50.174459ms for postStartSetup
	I0805 16:34:44.771948    4412 fix.go:56] duration metric: took 832.791167ms for fixHost
	I0805 16:34:44.771983    4412 main.go:141] libmachine: Using SSH client type: native
	I0805 16:34:44.772081    4412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10044ea10] 0x100451270 <nil>  [] 0s} localhost 50250 <nil> <nil>}
	I0805 16:34:44.772089    4412 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:34:44.845928    4412 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722900884.790172972
	
	I0805 16:34:44.845937    4412 fix.go:216] guest clock: 1722900884.790172972
	I0805 16:34:44.845941    4412 fix.go:229] Guest: 2024-08-05 16:34:44.790172972 -0700 PDT Remote: 2024-08-05 16:34:44.771949 -0700 PDT m=+0.933256960 (delta=18.223972ms)
	I0805 16:34:44.845959    4412 fix.go:200] guest clock delta is within tolerance: 18.223972ms
	I0805 16:34:44.845961    4412 start.go:83] releasing machines lock for "running-upgrade-230000", held for 906.815ms
	I0805 16:34:44.846026    4412 ssh_runner.go:195] Run: cat /version.json
	I0805 16:34:44.846036    4412 sshutil.go:53] new ssh client: &{IP:localhost Port:50250 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/running-upgrade-230000/id_rsa Username:docker}
	I0805 16:34:44.846209    4412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:34:44.846228    4412 sshutil.go:53] new ssh client: &{IP:localhost Port:50250 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/running-upgrade-230000/id_rsa Username:docker}
	W0805 16:34:44.846666    4412 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:50357->127.0.0.1:50250: write: broken pipe
	I0805 16:34:44.846681    4412 retry.go:31] will retry after 148.787587ms: ssh: handshake failed: write tcp 127.0.0.1:50357->127.0.0.1:50250: write: broken pipe
	W0805 16:34:45.038869    4412 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0805 16:34:45.038936    4412 ssh_runner.go:195] Run: systemctl --version
	I0805 16:34:45.040798    4412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 16:34:45.042629    4412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:34:45.042655    4412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0805 16:34:45.045372    4412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0805 16:34:45.049842    4412 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:34:45.049851    4412 start.go:495] detecting cgroup driver to use...
	I0805 16:34:45.049921    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:34:45.055032    4412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0805 16:34:45.058106    4412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:34:45.061123    4412 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:34:45.061148    4412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:34:45.064667    4412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:34:45.067678    4412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:34:45.070806    4412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:34:45.073597    4412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:34:45.076849    4412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:34:45.080278    4412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:34:45.083603    4412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:34:45.086565    4412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:34:45.089218    4412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:34:45.092332    4412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:34:45.180191    4412 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 16:34:45.190755    4412 start.go:495] detecting cgroup driver to use...
	I0805 16:34:45.190820    4412 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:34:45.197274    4412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:34:45.202193    4412 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:34:45.210216    4412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:34:45.214425    4412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:34:45.218846    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:34:45.223703    4412 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:34:45.224842    4412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:34:45.228116    4412 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:34:45.232710    4412 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:34:45.341004    4412 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:34:45.431777    4412 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:34:45.431838    4412 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:34:45.437427    4412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:34:45.525722    4412 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:34:58.872306    4412 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.346836041s)
	I0805 16:34:58.872372    4412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 16:34:58.877326    4412 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 16:34:58.886713    4412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:34:58.891677    4412 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 16:34:58.966612    4412 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 16:34:59.064417    4412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:34:59.146011    4412 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 16:34:59.152255    4412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:34:59.156541    4412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:34:59.231849    4412 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 16:34:59.272085    4412 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 16:34:59.272162    4412 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 16:34:59.275357    4412 start.go:563] Will wait 60s for crictl version
	I0805 16:34:59.275400    4412 ssh_runner.go:195] Run: which crictl
	I0805 16:34:59.276769    4412 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 16:34:59.288406    4412 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0805 16:34:59.288470    4412 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:34:59.301310    4412 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:34:59.324146    4412 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0805 16:34:59.324267    4412 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0805 16:34:59.325509    4412 kubeadm.go:883] updating cluster {Name:running-upgrade-230000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50282 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-230000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0805 16:34:59.325554    4412 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0805 16:34:59.325590    4412 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:34:59.336289    4412 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 16:34:59.336298    4412 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0805 16:34:59.336343    4412 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 16:34:59.339443    4412 ssh_runner.go:195] Run: which lz4
	I0805 16:34:59.340809    4412 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 16:34:59.342039    4412 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 16:34:59.342058    4412 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0805 16:35:00.244633    4412 docker.go:649] duration metric: took 903.87225ms to copy over tarball
	I0805 16:35:00.244699    4412 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 16:35:01.474722    4412 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.230034375s)
	I0805 16:35:01.474735    4412 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 16:35:01.490609    4412 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 16:35:01.493609    4412 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0805 16:35:01.498512    4412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:35:01.576072    4412 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:35:02.777660    4412 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.201595791s)
	I0805 16:35:02.777745    4412 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:35:02.789280    4412 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 16:35:02.789291    4412 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0805 16:35:02.789298    4412 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0805 16:35:02.793418    4412 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:35:02.795051    4412 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 16:35:02.797302    4412 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:35:02.797358    4412 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 16:35:02.799175    4412 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 16:35:02.799547    4412 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 16:35:02.800636    4412 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0805 16:35:02.800644    4412 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 16:35:02.801831    4412 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 16:35:02.802305    4412 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 16:35:02.802985    4412 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0805 16:35:02.802979    4412 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0805 16:35:02.804221    4412 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 16:35:02.804634    4412 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 16:35:02.805058    4412 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0805 16:35:02.806195    4412 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 16:35:03.198558    4412 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0805 16:35:03.216888    4412 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0805 16:35:03.216915    4412 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 16:35:03.216972    4412 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0805 16:35:03.221853    4412 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 16:35:03.229975    4412 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0805 16:35:03.233114    4412 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0805 16:35:03.234414    4412 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0805 16:35:03.234523    4412 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0805 16:35:03.236968    4412 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0805 16:35:03.236986    4412 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 16:35:03.237042    4412 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 16:35:03.243326    4412 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0805 16:35:03.246676    4412 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0805 16:35:03.246698    4412 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 16:35:03.246751    4412 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0805 16:35:03.255643    4412 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0805 16:35:03.267791    4412 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0805 16:35:03.267811    4412 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 16:35:03.267818    4412 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0805 16:35:03.267864    4412 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0805 16:35:03.267869    4412 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0805 16:35:03.267879    4412 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0805 16:35:03.267883    4412 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0805 16:35:03.267904    4412 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0805 16:35:03.273212    4412 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0805 16:35:03.273233    4412 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0805 16:35:03.273280    4412 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0805 16:35:03.287804    4412 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0805 16:35:03.287907    4412 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0805 16:35:03.287926    4412 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0805 16:35:03.287968    4412 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0805 16:35:03.291837    4412 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0805 16:35:03.293505    4412 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0805 16:35:03.293603    4412 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0805 16:35:03.294228    4412 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0805 16:35:03.294238    4412 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0805 16:35:03.294245    4412 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0805 16:35:03.294246    4412 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0805 16:35:03.305269    4412 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0805 16:35:03.305269    4412 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0805 16:35:03.305289    4412 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 16:35:03.305298    4412 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0805 16:35:03.305332    4412 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0805 16:35:03.341538    4412 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0805 16:35:03.345881    4412 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0805 16:35:03.345895    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0805 16:35:03.444199    4412 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0805 16:35:03.444233    4412 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0805 16:35:03.444249    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0805 16:35:03.465318    4412 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0805 16:35:03.465424    4412 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:35:03.541216    4412 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0805 16:35:03.541233    4412 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0805 16:35:03.541247    4412 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:35:03.541308    4412 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:35:03.638968    4412 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0805 16:35:03.638983    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0805 16:35:03.932224    4412 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0805 16:35:03.932284    4412 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0805 16:35:03.932404    4412 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0805 16:35:03.935018    4412 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0805 16:35:03.935043    4412 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0805 16:35:03.976363    4412 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0805 16:35:03.976390    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0805 16:35:04.213074    4412 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0805 16:35:04.213110    4412 cache_images.go:92] duration metric: took 1.423832958s to LoadCachedImages
	W0805 16:35:04.213162    4412 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0805 16:35:04.213168    4412 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0805 16:35:04.213213    4412 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-230000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-230000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 16:35:04.213275    4412 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 16:35:04.226557    4412 cni.go:84] Creating CNI manager for ""
	I0805 16:35:04.226570    4412 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:35:04.226575    4412 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 16:35:04.226584    4412 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-230000 NodeName:running-upgrade-230000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 16:35:04.226651    4412 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-230000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 16:35:04.226711    4412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0805 16:35:04.229701    4412 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 16:35:04.229732    4412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 16:35:04.232593    4412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0805 16:35:04.238273    4412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 16:35:04.243368    4412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0805 16:35:04.248955    4412 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0805 16:35:04.250227    4412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:35:04.329472    4412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:35:04.334142    4412 certs.go:68] Setting up /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000 for IP: 10.0.2.15
	I0805 16:35:04.334148    4412 certs.go:194] generating shared ca certs ...
	I0805 16:35:04.334157    4412 certs.go:226] acquiring lock for ca certs: {Name:mk07f84aa9f3d3ae10a769c730392685ad86b558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:35:04.334320    4412 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/ca.key
	I0805 16:35:04.334375    4412 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/proxy-client-ca.key
	I0805 16:35:04.334383    4412 certs.go:256] generating profile certs ...
	I0805 16:35:04.334453    4412 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000/client.key
	I0805 16:35:04.334469    4412 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000/apiserver.key.bfddaf4f
	I0805 16:35:04.334477    4412 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000/apiserver.crt.bfddaf4f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0805 16:35:04.550389    4412 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000/apiserver.crt.bfddaf4f ...
	I0805 16:35:04.550401    4412 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000/apiserver.crt.bfddaf4f: {Name:mk899a43764f58526f4e29e03f9180fcf9eab079 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:35:04.552318    4412 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000/apiserver.key.bfddaf4f ...
	I0805 16:35:04.552326    4412 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000/apiserver.key.bfddaf4f: {Name:mk3b3b22dee6a6f637226b7140c3d5e6fb55bfc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:35:04.552512    4412 certs.go:381] copying /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000/apiserver.crt.bfddaf4f -> /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000/apiserver.crt
	I0805 16:35:04.552679    4412 certs.go:385] copying /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000/apiserver.key.bfddaf4f -> /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000/apiserver.key
	I0805 16:35:04.552846    4412 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000/proxy-client.key
	I0805 16:35:04.552991    4412 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/1551.pem (1338 bytes)
	W0805 16:35:04.553020    4412 certs.go:480] ignoring /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/1551_empty.pem, impossibly tiny 0 bytes
	I0805 16:35:04.553027    4412 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 16:35:04.553055    4412 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem (1078 bytes)
	I0805 16:35:04.553081    4412 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem (1123 bytes)
	I0805 16:35:04.553106    4412 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/key.pem (1675 bytes)
	I0805 16:35:04.553167    4412 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/files/etc/ssl/certs/15512.pem (1708 bytes)
	I0805 16:35:04.553536    4412 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 16:35:04.562421    4412 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 16:35:04.569908    4412 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 16:35:04.577599    4412 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 16:35:04.585533    4412 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0805 16:35:04.592946    4412 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 16:35:04.599925    4412 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 16:35:04.607553    4412 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 16:35:04.614546    4412 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/files/etc/ssl/certs/15512.pem --> /usr/share/ca-certificates/15512.pem (1708 bytes)
	I0805 16:35:04.621743    4412 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 16:35:04.628911    4412 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/1551.pem --> /usr/share/ca-certificates/1551.pem (1338 bytes)
	I0805 16:35:04.635949    4412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 16:35:04.642155    4412 ssh_runner.go:195] Run: openssl version
	I0805 16:35:04.643970    4412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15512.pem && ln -fs /usr/share/ca-certificates/15512.pem /etc/ssl/certs/15512.pem"
	I0805 16:35:04.647128    4412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15512.pem
	I0805 16:35:04.648483    4412 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 22:55 /usr/share/ca-certificates/15512.pem
	I0805 16:35:04.648502    4412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15512.pem
	I0805 16:35:04.650221    4412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15512.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 16:35:04.652830    4412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 16:35:04.655942    4412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:35:04.657316    4412 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:35:04.657335    4412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:35:04.659125    4412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 16:35:04.661805    4412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1551.pem && ln -fs /usr/share/ca-certificates/1551.pem /etc/ssl/certs/1551.pem"
	I0805 16:35:04.665003    4412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1551.pem
	I0805 16:35:04.666564    4412 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 22:55 /usr/share/ca-certificates/1551.pem
	I0805 16:35:04.666582    4412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1551.pem
	I0805 16:35:04.668373    4412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1551.pem /etc/ssl/certs/51391683.0"
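
Each openssl x509 -hash run above computes the subject-hash filename under which OpenSSL looks up a CA (for example b5213941.0 for minikubeCA.pem), which is why every hash run is paired with an ln -fs into /etc/ssl/certs. A small sketch of that pairing, assuming openssl is on PATH and using a path from this log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same hash computation as the log's `openssl x509 -hash -noout -in ...`.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout",
			"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		// OpenSSL resolves CAs via <subject-hash>.0 symlinks, hence the ln -fs runs.
		fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
	}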
	I0805 16:35:04.671280    4412 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:35:04.672837    4412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 16:35:04.674602    4412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 16:35:04.676347    4412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 16:35:04.678135    4412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 16:35:04.680076    4412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 16:35:04.681867    4412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
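
The -checkend 86400 runs above ask openssl whether each certificate expires within the next 24 hours. A rough Go equivalent using crypto/x509, with the file path taken from this log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// -checkend 86400: fail if the cert's NotAfter falls within 24 hours.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 86400 seconds")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least another 86400 seconds")
	}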
	I0805 16:35:04.683613    4412 kubeadm.go:392] StartCluster: {Name:running-upgrade-230000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50282 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-230000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 16:35:04.683685    4412 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 16:35:04.694204    4412 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 16:35:04.697396    4412 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 16:35:04.697404    4412 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 16:35:04.697430    4412 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 16:35:04.700632    4412 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:35:04.700866    4412 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-230000" does not appear in /Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:35:04.700920    4412 kubeconfig.go:62] /Users/jenkins/minikube-integration/19373-1054/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-230000" cluster setting kubeconfig missing "running-upgrade-230000" context setting]
	I0805 16:35:04.701048    4412 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/kubeconfig: {Name:mk0db307fdf97cd8e18f7fd35d350a5523a32e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:35:04.701686    4412 kapi.go:59] client config for running-upgrade-230000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1054/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1017e3e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:35:04.702003    4412 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 16:35:04.704964    4412 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-230000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
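
The drift shown in the diff is plausible for this upgrade path: the newer minikube renders the CRI socket as a URL (unix:///var/run/cri-dockerd.sock) rather than a bare path, and selects different kubelet settings (cgroupfs driver, hairpin-veth, a 15m runtime request timeout). A hedged sketch of the decision that drives the "will reconfigure" branch, mirroring the sudo diff/cp commands in this log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// diff exits non-zero when the files differ (or on other failures,
		// which this sketch does not distinguish).
		err := exec.Command("sudo", "diff", "-u",
			"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").Run()
		if err != nil {
			fmt.Println("config drift detected, replacing kubeadm.yaml")
			if err := exec.Command("sudo", "cp",
				"/var/tmp/minikube/kubeadm.yaml.new", "/var/tmp/minikube/kubeadm.yaml").Run(); err != nil {
				panic(err)
			}
		}
	}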
	I0805 16:35:04.704971    4412 kubeadm.go:1160] stopping kube-system containers ...
	I0805 16:35:04.705014    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 16:35:04.716096    4412 docker.go:483] Stopping containers: [2d45caf5606e 80f859edee24 7269c8e22b9a d7d11be02070 8875e7fd4be2 9853811c4e2c 95ed370be695 e1b955358bac 3cf069d392e1 e1db204b999f ae27721df005 de09bd85fcfb 57d8ecd8abef]
	I0805 16:35:04.716170    4412 ssh_runner.go:195] Run: docker stop 2d45caf5606e 80f859edee24 7269c8e22b9a d7d11be02070 8875e7fd4be2 9853811c4e2c 95ed370be695 e1b955358bac 3cf069d392e1 e1db204b999f ae27721df005 de09bd85fcfb 57d8ecd8abef
	I0805 16:35:04.727292    4412 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 16:35:04.819849    4412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 16:35:04.824332    4412 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Aug  5 23:34 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Aug  5 23:34 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug  5 23:34 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Aug  5 23:34 /etc/kubernetes/scheduler.conf
	
	I0805 16:35:04.824364    4412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/admin.conf
	I0805 16:35:04.828013    4412 kubeadm.go:163] "https://control-plane.minikube.internal:50282" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:35:04.828042    4412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 16:35:04.831783    4412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/kubelet.conf
	I0805 16:35:04.836217    4412 kubeadm.go:163] "https://control-plane.minikube.internal:50282" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:35:04.836252    4412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 16:35:04.841701    4412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/controller-manager.conf
	I0805 16:35:04.850189    4412 kubeadm.go:163] "https://control-plane.minikube.internal:50282" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:35:04.850248    4412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 16:35:04.856511    4412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/scheduler.conf
	I0805 16:35:04.859454    4412 kubeadm.go:163] "https://control-plane.minikube.internal:50282" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:35:04.859484    4412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 16:35:04.862634    4412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 16:35:04.865468    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:35:04.892291    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:35:05.344746    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:35:05.544888    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:35:05.568619    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
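
With the new config in place, the restart replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than running a full init. Condensed into a sketch, with the binary path and config file copied from this log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, phase := range phases {
			cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
			if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
				panic(err)
			}
		}
	}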
	I0805 16:35:05.596955    4412 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:35:05.597027    4412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:35:06.100028    4412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:35:06.599108    4412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:35:06.603403    4412 api_server.go:72] duration metric: took 1.006469542s to wait for apiserver process to appear ...
	I0805 16:35:06.603411    4412 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:35:06.603420    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:35:11.605554    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:35:11.605628    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:35:16.606342    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:35:16.606423    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:35:21.607436    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:35:21.607540    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:35:26.609767    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:35:26.609911    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:35:31.611739    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:35:31.611869    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:35:36.614299    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:35:36.614385    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:35:41.616949    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:35:41.617043    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:35:46.617580    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:35:46.617650    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:35:51.620075    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:35:51.620155    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:35:56.622559    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:35:56.622632    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:36:01.625088    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:36:01.625162    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:36:06.626126    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
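
The healthz wait above probes https://10.0.2.15:8443/healthz with roughly a five-second client timeout per attempt; each failed attempt logs a "stopped:" line and the loop retries until the overall wait lapses, at which point minikube falls back to gathering component logs. A simplified Go version of that loop (endpoint and per-attempt timeout are from this log; the overall deadline and retry interval here are assumptions):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s gap between probes above
			Transport: &http.Transport{
				// The apiserver cert is not trusted by this sketch; skip
				// verification for illustration only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err != nil {
				fmt.Println("stopped:", err)
				time.Sleep(500 * time.Millisecond)
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver is healthy")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting; falling back to gathering logs")
	}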
	I0805 16:36:06.626492    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:36:06.662304    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:36:06.662447    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:36:06.683170    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:36:06.683261    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:36:06.697882    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:36:06.697962    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:36:06.710349    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:36:06.710425    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:36:06.727979    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:36:06.728040    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:36:06.738248    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:36:06.738321    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:36:06.748906    4412 logs.go:276] 0 containers: []
	W0805 16:36:06.748916    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:36:06.748965    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:36:06.759120    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:36:06.759137    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:36:06.759143    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:36:06.774206    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:36:06.774219    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:36:06.788523    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:36:06.788535    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:36:06.826094    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:36:06.826101    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:36:06.840107    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:36:06.840116    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:36:06.857600    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:36:06.857613    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:36:06.869632    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:36:06.869644    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:36:06.881167    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:36:06.881181    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:36:06.907562    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:36:06.907573    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:36:06.921470    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:36:06.921484    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:36:06.933804    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:36:06.933815    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:36:06.945490    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:36:06.945499    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:36:06.950382    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:36:06.950391    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:36:07.019556    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:36:07.019570    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:36:07.032202    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:36:07.032214    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:36:07.043708    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:36:07.043719    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:36:07.063264    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:36:07.063277    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
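
Each "Gathering logs" round that follows repeats the same pattern: enumerate container IDs per component with a docker ps name filter, then tail the last 400 lines of each container. A condensed sketch, with the filter format and tail length copied from this log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "storage-provisioner"}
		for _, comp := range components {
			out, err := exec.Command("docker", "ps", "-a",
				"--filter", "name=k8s_"+comp, "--format", "{{.ID}}").Output()
			if err != nil {
				panic(err)
			}
			for _, id := range strings.Fields(string(out)) {
				// Same tail length as the log's `docker logs --tail 400 <id>`.
				fmt.Printf("docker logs --tail 400 %s\n", id)
			}
		}
	}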
	I0805 16:36:09.576323    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:36:14.578970    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:36:14.579409    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:36:14.616078    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:36:14.616215    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:36:14.637472    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:36:14.637575    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:36:14.652464    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:36:14.652564    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:36:14.669241    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:36:14.669318    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:36:14.679552    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:36:14.679610    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:36:14.690101    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:36:14.690165    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:36:14.700360    4412 logs.go:276] 0 containers: []
	W0805 16:36:14.700370    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:36:14.700424    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:36:14.710884    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:36:14.710906    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:36:14.710913    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:36:14.728253    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:36:14.728263    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:36:14.739245    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:36:14.739253    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:36:14.763244    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:36:14.763255    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:36:14.797840    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:36:14.797850    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:36:14.802051    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:36:14.802056    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:36:14.815578    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:36:14.815589    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:36:14.829868    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:36:14.829881    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:36:14.842675    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:36:14.842686    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:36:14.856583    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:36:14.856594    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:36:14.867848    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:36:14.867858    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:36:14.878809    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:36:14.878818    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:36:14.890429    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:36:14.890438    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:36:14.901868    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:36:14.901878    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:36:14.913467    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:36:14.913478    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:36:14.949657    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:36:14.949672    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:36:14.964143    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:36:14.964155    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:36:17.480311    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:36:22.482658    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:36:22.483145    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:36:22.535745    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:36:22.535868    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:36:22.553167    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:36:22.553248    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:36:22.567259    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:36:22.567333    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:36:22.579229    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:36:22.579300    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:36:22.589654    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:36:22.589729    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:36:22.599992    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:36:22.600058    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:36:22.610451    4412 logs.go:276] 0 containers: []
	W0805 16:36:22.610461    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:36:22.610514    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:36:22.621724    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:36:22.621740    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:36:22.621745    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:36:22.635719    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:36:22.635731    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:36:22.650656    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:36:22.650668    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:36:22.661743    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:36:22.661754    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:36:22.678898    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:36:22.678910    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:36:22.693393    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:36:22.693405    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:36:22.717874    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:36:22.717882    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:36:22.731168    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:36:22.731180    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:36:22.735665    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:36:22.735674    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:36:22.750525    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:36:22.750535    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:36:22.768259    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:36:22.768270    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:36:22.780276    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:36:22.780284    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:36:22.791591    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:36:22.791600    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:36:22.827480    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:36:22.827491    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:36:22.861266    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:36:22.861278    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:36:22.874001    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:36:22.874014    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:36:22.887418    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:36:22.887428    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:36:25.401040    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:36:30.403789    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:36:30.404109    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:36:30.434314    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:36:30.434434    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:36:30.453407    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:36:30.453504    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:36:30.467321    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:36:30.467389    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:36:30.478765    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:36:30.478824    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:36:30.489528    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:36:30.489599    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:36:30.500785    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:36:30.500850    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:36:30.511269    4412 logs.go:276] 0 containers: []
	W0805 16:36:30.511279    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:36:30.511332    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:36:30.526467    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:36:30.526485    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:36:30.526492    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:36:30.538007    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:36:30.538017    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:36:30.553713    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:36:30.553722    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:36:30.565347    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:36:30.565358    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:36:30.576347    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:36:30.576356    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:36:30.581126    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:36:30.581133    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:36:30.593474    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:36:30.593485    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:36:30.607307    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:36:30.607320    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:36:30.652372    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:36:30.652383    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:36:30.673761    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:36:30.673770    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:36:30.687797    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:36:30.687808    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:36:30.744361    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:36:30.744373    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:36:30.756189    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:36:30.756200    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:36:30.782143    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:36:30.782151    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:36:30.793426    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:36:30.793435    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:36:30.813659    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:36:30.813669    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:36:30.833152    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:36:30.833163    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:36:33.346700    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:36:38.349360    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:36:38.349758    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:36:38.391425    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:36:38.391552    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:36:38.412613    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:36:38.412718    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:36:38.430690    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:36:38.430775    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:36:38.442558    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:36:38.442637    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:36:38.453127    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:36:38.453200    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:36:38.463755    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:36:38.463829    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:36:38.474225    4412 logs.go:276] 0 containers: []
	W0805 16:36:38.474236    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:36:38.474296    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:36:38.487786    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:36:38.487801    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:36:38.487807    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:36:38.502054    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:36:38.502065    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:36:38.517296    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:36:38.517307    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:36:38.528763    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:36:38.528774    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:36:38.567715    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:36:38.567726    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:36:38.605181    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:36:38.605191    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:36:38.619207    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:36:38.619217    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:36:38.630057    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:36:38.630070    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:36:38.641755    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:36:38.641765    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:36:38.654401    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:36:38.654414    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:36:38.668441    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:36:38.668453    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:36:38.680887    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:36:38.680897    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:36:38.702959    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:36:38.702971    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:36:38.714837    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:36:38.714847    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:36:38.726253    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:36:38.726266    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:36:38.730432    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:36:38.730441    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:36:38.747121    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:36:38.747130    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:36:41.274969    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:36:46.277654    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:36:46.278098    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:36:46.316679    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:36:46.316809    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:36:46.338529    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:36:46.338640    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:36:46.354111    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:36:46.354194    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:36:46.366373    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:36:46.366445    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:36:46.377866    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:36:46.377925    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:36:46.388965    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:36:46.389031    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:36:46.399332    4412 logs.go:276] 0 containers: []
	W0805 16:36:46.399343    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:36:46.399396    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:36:46.409508    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:36:46.409525    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:36:46.409531    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:36:46.413852    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:36:46.413858    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:36:46.425156    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:36:46.425170    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:36:46.451228    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:36:46.451235    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:36:46.465236    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:36:46.465248    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:36:46.479785    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:36:46.479795    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:36:46.493788    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:36:46.493798    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:36:46.505437    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:36:46.505447    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:36:46.516881    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:36:46.516892    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:36:46.530709    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:36:46.530719    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:36:46.549684    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:36:46.549697    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:36:46.567094    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:36:46.567107    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:36:46.578570    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:36:46.578584    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:36:46.616102    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:36:46.616113    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:36:46.650506    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:36:46.650518    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:36:46.664336    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:36:46.664345    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:36:46.683608    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:36:46.683621    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:36:49.197893    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:36:54.200600    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:36:54.200969    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:36:54.241064    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:36:54.241197    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:36:54.262710    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:36:54.262817    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:36:54.280473    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:36:54.280544    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:36:54.292187    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:36:54.292258    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:36:54.303255    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:36:54.303326    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:36:54.314041    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:36:54.314108    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:36:54.324142    4412 logs.go:276] 0 containers: []
	W0805 16:36:54.324155    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:36:54.324213    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:36:54.334768    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:36:54.334785    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:36:54.334790    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:36:54.338915    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:36:54.338924    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:36:54.374121    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:36:54.374135    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:36:54.388795    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:36:54.388806    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:36:54.400914    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:36:54.400927    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:36:54.413002    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:36:54.413014    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:36:54.427752    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:36:54.427765    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:36:54.441220    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:36:54.441230    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:36:54.453006    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:36:54.453019    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:36:54.477162    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:36:54.477170    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:36:54.491403    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:36:54.491413    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:36:54.504328    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:36:54.504340    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:36:54.522776    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:36:54.522786    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:36:54.540393    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:36:54.540403    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:36:54.578197    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:36:54.578207    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:36:54.592651    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:36:54.592663    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:36:54.603956    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:36:54.603967    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
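	The block above is one iteration of a pattern that repeats for the rest of this start attempt: minikube polls the apiserver's /healthz endpoint, gives up after a 5-second client timeout (the gap between each "Checking apiserver healthz" and "stopped" pair), and on each failure re-enumerates the control-plane containers and tails their logs before probing again. A minimal Go sketch of the polling half, assuming illustrative names (checkHealthz, the retry cadence, and the InsecureSkipVerify setting are inferred from the log, not taken from minikube's source):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // checkHealthz issues a GET against the apiserver health endpoint with a
    // short client timeout, mirroring the "Checking apiserver healthz" /
    // "stopped" pairs in the log above.
    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the 5s gap in the log
    		Transport: &http.Transport{
    			// assumption: the in-VM apiserver certificate is not in the
    			// host trust store, so verification is skipped in this sketch
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return fmt.Errorf("stopped: %s: %w", url, err)
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	for {
    		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
    			fmt.Println(err)            // the real runner gathers container logs here
    			time.Sleep(2 * time.Second) // cadence inferred from the timestamps
    			continue
    		}
    		fmt.Println("apiserver healthy")
    		return
    	}
    }

	In this run the loop never succeeds: every probe against https://10.0.2.15:8443/healthz times out, which is why the same gathering cycle repeats below.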
	I0805 16:36:57.120427    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:37:02.122576    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:37:02.122752    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:37:02.141649    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:37:02.141740    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:37:02.155321    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:37:02.155394    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:37:02.167346    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:37:02.167410    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:37:02.177747    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:37:02.177810    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:37:02.188142    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:37:02.188198    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:37:02.198533    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:37:02.198601    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:37:02.208807    4412 logs.go:276] 0 containers: []
	W0805 16:37:02.208818    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:37:02.208867    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:37:02.219797    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:37:02.219813    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:37:02.219818    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:37:02.234596    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:37:02.234604    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:37:02.249426    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:37:02.249441    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:37:02.266638    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:37:02.266648    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:37:02.282192    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:37:02.282203    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:37:02.305809    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:37:02.305818    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:37:02.317310    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:37:02.317325    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:37:02.330353    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:37:02.330366    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:37:02.344358    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:37:02.344368    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:37:02.358536    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:37:02.358549    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:37:02.369243    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:37:02.369254    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:37:02.380432    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:37:02.380443    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:37:02.417676    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:37:02.417683    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:37:02.421902    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:37:02.421910    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:37:02.455458    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:37:02.455470    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:37:02.469389    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:37:02.469402    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:37:02.481693    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:37:02.481702    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
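	The "Gathering logs" half of each cycle is the same fan-out every time: for each control-plane component, list matching container IDs with a docker ps name filter, then tail the last 400 lines of each container's log (kubelet and Docker logs come from journalctl instead, and "describe nodes" shells out to the bundled kubectl). A hedged sketch of that fan-out, again with invented names (containerIDs, the components slice), shelling out the same way the ssh_runner lines do:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists container IDs whose name matches k8s_<component>,
    // the same filter used by the "docker ps -a --filter=name=..." lines above.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
    	for _, c := range components {
    		ids, err := containerIDs(c)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", c)
    			continue // e.g. kindnet in this run: the filter matches nothing
    		}
    		for _, id := range ids {
    			fmt.Printf("Gathering logs for %s [%s] ...\n", c, id)
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Print(string(logs))
    		}
    	}
    }

	Note the kindnet lookup returns zero containers in every cycle here, presumably because this profile runs no kindnet pod, so each cycle emits the "No container was found matching \"kindnet\"" warning rather than failing.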
	I0805 16:37:04.995047    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:37:09.997255    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:37:09.997522    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:37:10.039075    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:37:10.039185    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:37:10.062322    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:37:10.062404    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:37:10.077669    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:37:10.077742    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:37:10.090477    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:37:10.090557    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:37:10.102057    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:37:10.102122    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:37:10.113536    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:37:10.113601    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:37:10.124535    4412 logs.go:276] 0 containers: []
	W0805 16:37:10.124548    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:37:10.124599    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:37:10.134721    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:37:10.134739    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:37:10.134744    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:37:10.147144    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:37:10.147157    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:37:10.164962    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:37:10.164975    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:37:10.177184    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:37:10.177195    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:37:10.195421    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:37:10.195436    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:37:10.199965    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:37:10.199973    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:37:10.214951    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:37:10.214967    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:37:10.232003    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:37:10.232017    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:37:10.248586    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:37:10.248605    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:37:10.276710    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:37:10.276731    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:37:10.290360    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:37:10.290372    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:37:10.303772    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:37:10.303785    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:37:10.317308    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:37:10.317323    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:37:10.332997    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:37:10.333009    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:37:10.347959    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:37:10.347972    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:37:10.362871    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:37:10.362885    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:37:10.400486    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:37:10.400498    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:37:12.945364    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:37:17.948104    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:37:17.948300    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:37:17.960212    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:37:17.960292    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:37:17.972467    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:37:17.972533    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:37:17.987388    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:37:17.987453    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:37:17.997619    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:37:17.997680    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:37:18.008323    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:37:18.008395    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:37:18.019014    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:37:18.019087    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:37:18.028881    4412 logs.go:276] 0 containers: []
	W0805 16:37:18.028891    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:37:18.028946    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:37:18.039594    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:37:18.039612    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:37:18.039617    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:37:18.056562    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:37:18.056574    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:37:18.078545    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:37:18.078557    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:37:18.097555    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:37:18.097565    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:37:18.109905    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:37:18.109920    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:37:18.114496    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:37:18.114502    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:37:18.126588    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:37:18.126602    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:37:18.152178    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:37:18.152190    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:37:18.163879    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:37:18.163890    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:37:18.188282    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:37:18.188290    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:37:18.224621    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:37:18.224627    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:37:18.260777    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:37:18.260788    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:37:18.272536    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:37:18.272548    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:37:18.286939    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:37:18.286954    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:37:18.300310    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:37:18.300321    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:37:18.314737    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:37:18.314747    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:37:18.329316    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:37:18.329327    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:37:20.845123    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:37:25.847379    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:37:25.847796    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:37:25.886663    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:37:25.886803    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:37:25.909092    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:37:25.909215    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:37:25.924094    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:37:25.924195    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:37:25.936287    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:37:25.936364    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:37:25.947628    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:37:25.947699    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:37:25.962574    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:37:25.962639    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:37:25.973140    4412 logs.go:276] 0 containers: []
	W0805 16:37:25.973157    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:37:25.973214    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:37:25.983725    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:37:25.983743    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:37:25.983749    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:37:26.019020    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:37:26.019029    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:37:26.031967    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:37:26.031978    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:37:26.046497    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:37:26.046509    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:37:26.058243    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:37:26.058253    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:37:26.069584    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:37:26.069595    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:37:26.080725    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:37:26.080735    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:37:26.085042    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:37:26.085051    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:37:26.120843    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:37:26.120854    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:37:26.137586    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:37:26.137599    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:37:26.148773    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:37:26.148784    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:37:26.163320    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:37:26.163332    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:37:26.188718    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:37:26.188726    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:37:26.202700    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:37:26.202713    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:37:26.216765    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:37:26.216777    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:37:26.234072    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:37:26.234083    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:37:26.245172    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:37:26.245184    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:37:28.757436    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:37:33.760144    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:37:33.760296    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:37:33.773459    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:37:33.773529    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:37:33.784762    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:37:33.784825    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:37:33.795268    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:37:33.795338    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:37:33.805506    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:37:33.805572    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:37:33.816083    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:37:33.816150    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:37:33.826701    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:37:33.826765    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:37:33.836495    4412 logs.go:276] 0 containers: []
	W0805 16:37:33.836505    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:37:33.836558    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:37:33.846674    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:37:33.846691    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:37:33.846697    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:37:33.881979    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:37:33.881989    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:37:33.895178    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:37:33.895189    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:37:33.906537    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:37:33.906550    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:37:33.920174    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:37:33.920183    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:37:33.931588    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:37:33.931601    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:37:33.943640    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:37:33.943652    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:37:33.948472    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:37:33.948478    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:37:33.962763    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:37:33.962775    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:37:33.980977    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:37:33.980988    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:37:33.995501    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:37:33.995514    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:37:34.032646    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:37:34.032654    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:37:34.046481    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:37:34.046491    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:37:34.063362    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:37:34.063372    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:37:34.075311    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:37:34.075321    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:37:34.086952    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:37:34.086962    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:37:34.105686    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:37:34.105696    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:37:36.632541    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:37:41.634754    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:37:41.635007    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:37:41.664470    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:37:41.664581    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:37:41.682271    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:37:41.682344    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:37:41.695843    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:37:41.695905    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:37:41.708175    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:37:41.708231    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:37:41.718554    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:37:41.718626    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:37:41.728902    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:37:41.728956    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:37:41.739772    4412 logs.go:276] 0 containers: []
	W0805 16:37:41.739786    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:37:41.739846    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:37:41.754774    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:37:41.754790    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:37:41.754796    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:37:41.759746    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:37:41.759752    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:37:41.771106    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:37:41.771118    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:37:41.782549    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:37:41.782560    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:37:41.796161    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:37:41.796172    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:37:41.831978    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:37:41.831988    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:37:41.846575    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:37:41.846587    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:37:41.859896    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:37:41.859910    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:37:41.877614    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:37:41.877626    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:37:41.889165    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:37:41.889177    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:37:41.924529    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:37:41.924542    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:37:41.937128    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:37:41.937138    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:37:41.951722    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:37:41.951732    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:37:41.966293    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:37:41.966305    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:37:41.978643    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:37:41.978657    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:37:41.999262    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:37:41.999272    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:37:42.013827    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:37:42.013837    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:37:44.541342    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:37:49.543752    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:37:49.544034    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:37:49.575636    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:37:49.575776    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:37:49.595146    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:37:49.595237    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:37:49.617918    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:37:49.617999    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:37:49.629692    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:37:49.629759    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:37:49.641921    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:37:49.641995    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:37:49.652786    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:37:49.652857    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:37:49.663205    4412 logs.go:276] 0 containers: []
	W0805 16:37:49.663218    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:37:49.663272    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:37:49.675687    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:37:49.675706    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:37:49.675712    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:37:49.691120    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:37:49.691129    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:37:49.703124    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:37:49.703135    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:37:49.746964    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:37:49.746973    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:37:49.761731    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:37:49.761745    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:37:49.774212    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:37:49.774224    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:37:49.786241    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:37:49.786257    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:37:49.799766    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:37:49.799778    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:37:49.811962    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:37:49.811973    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:37:49.816760    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:37:49.816766    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:37:49.831177    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:37:49.831187    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:37:49.843618    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:37:49.843632    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:37:49.857835    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:37:49.857845    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:37:49.872543    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:37:49.872553    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:37:49.889231    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:37:49.889243    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:37:49.906276    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:37:49.906287    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:37:49.941615    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:37:49.941622    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:37:52.466907    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:37:57.469471    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:37:57.469636    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:37:57.483864    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:37:57.483953    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:37:57.495282    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:37:57.495348    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:37:57.505714    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:37:57.505959    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:37:57.517291    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:37:57.517367    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:37:57.527830    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:37:57.527904    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:37:57.538560    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:37:57.538627    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:37:57.549021    4412 logs.go:276] 0 containers: []
	W0805 16:37:57.549033    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:37:57.549087    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:37:57.559479    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:37:57.559499    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:37:57.559506    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:37:57.597125    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:37:57.597134    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:37:57.601184    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:37:57.601190    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:37:57.615892    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:37:57.615907    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:37:57.629902    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:37:57.629917    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:37:57.641592    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:37:57.641603    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:37:57.658682    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:37:57.658695    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:37:57.670217    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:37:57.670231    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:37:57.693308    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:37:57.693316    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:37:57.704647    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:37:57.704657    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:37:57.739862    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:37:57.739873    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:37:57.754598    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:37:57.754610    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:37:57.766665    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:37:57.766676    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:37:57.777756    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:37:57.777767    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:37:57.791860    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:37:57.791874    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:37:57.806021    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:37:57.806034    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:37:57.817008    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:37:57.817021    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:38:00.336364    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:38:05.338453    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:38:05.338664    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:38:05.364017    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:38:05.364135    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:38:05.400809    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:38:05.400881    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:38:05.414479    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:38:05.414546    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:38:05.429637    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:38:05.429712    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:38:05.441924    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:38:05.441991    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:38:05.452142    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:38:05.452204    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:38:05.463568    4412 logs.go:276] 0 containers: []
	W0805 16:38:05.463583    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:38:05.463636    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:38:05.473964    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:38:05.473982    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:38:05.473988    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:38:05.509197    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:38:05.509207    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:38:05.526853    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:38:05.526864    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:38:05.538696    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:38:05.538707    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:38:05.550699    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:38:05.550710    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:38:05.564970    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:38:05.564983    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:38:05.577370    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:38:05.577383    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:38:05.588554    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:38:05.588565    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:38:05.600102    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:38:05.600113    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:38:05.618944    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:38:05.618955    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:38:05.630569    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:38:05.630579    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:38:05.668347    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:38:05.668360    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:38:05.673683    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:38:05.673693    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:38:05.687367    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:38:05.687379    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:38:05.708775    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:38:05.708785    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:38:05.720063    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:38:05.720074    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:38:05.738327    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:38:05.738339    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:38:08.262431    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:38:13.265101    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:38:13.265287    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:38:13.277508    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:38:13.277590    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:38:13.288588    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:38:13.288664    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:38:13.299306    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:38:13.299374    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:38:13.313995    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:38:13.314065    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:38:13.324751    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:38:13.324819    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:38:13.335673    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:38:13.335738    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:38:13.346335    4412 logs.go:276] 0 containers: []
	W0805 16:38:13.346345    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:38:13.346403    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:38:13.357197    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:38:13.357213    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:38:13.357218    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:38:13.371199    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:38:13.371209    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:38:13.386319    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:38:13.386330    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:38:13.398660    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:38:13.398672    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:38:13.410313    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:38:13.410324    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:38:13.435646    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:38:13.435654    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:38:13.440238    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:38:13.440244    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:38:13.480630    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:38:13.480643    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:38:13.493888    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:38:13.493899    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:38:13.510039    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:38:13.510051    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:38:13.521135    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:38:13.521146    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:38:13.534988    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:38:13.534998    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:38:13.546737    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:38:13.546749    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:38:13.558369    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:38:13.558380    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:38:13.570515    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:38:13.570525    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:38:13.607626    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:38:13.607635    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:38:13.629528    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:38:13.629538    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
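Each of the retry cycles above and below follows the same two-step pattern: minikube first resolves container IDs for every control-plane component with a Docker name filter, then tails each container's log. The equivalent commands run by hand over the same runtime (the k8s_ name prefix is what cri-dockerd gives Kubernetes-managed containers; <id> is a placeholder):

    $ docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
    $ docker logs --tail 400 <id>

An empty result, as for k8s_kindnet, just means no such pod ever ran on this node; this cluster uses the bridge CNI rather than kindnet, so that warning is expected noise rather than a failure.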
	I0805 16:38:16.149869    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:38:21.152126    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
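The probe that keeps failing here is a plain HTTPS GET against the apiserver's /healthz endpoint with a 5-second client timeout; the timestamps show each attempt giving up after exactly 5s before minikube falls back to another round of log gathering. A manual equivalent, assuming the same in-VM address from the log (a healthy apiserver answers "ok"):

    $ curl -k --max-time 5 https://10.0.2.15:8443/healthz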
	I0805 16:38:21.152555    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:38:21.191444    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:38:21.191576    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:38:21.212539    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:38:21.212652    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:38:21.227193    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:38:21.227273    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:38:21.239715    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:38:21.239790    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:38:21.250595    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:38:21.250660    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:38:21.262119    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:38:21.262193    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:38:21.272629    4412 logs.go:276] 0 containers: []
	W0805 16:38:21.272640    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:38:21.272695    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:38:21.283280    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:38:21.283298    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:38:21.283303    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:38:21.295600    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:38:21.295610    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:38:21.310159    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:38:21.310168    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:38:21.325062    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:38:21.325075    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:38:21.348368    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:38:21.348375    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:38:21.361925    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:38:21.361937    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:38:21.398816    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:38:21.398824    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:38:21.433635    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:38:21.433649    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:38:21.445613    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:38:21.445626    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:38:21.457586    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:38:21.457598    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:38:21.468939    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:38:21.468951    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:38:21.481494    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:38:21.481506    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:38:21.501289    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:38:21.501300    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:38:21.505725    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:38:21.505731    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:38:21.519642    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:38:21.519653    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:38:21.536406    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:38:21.536416    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:38:21.555072    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:38:21.555084    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:38:24.069264    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:38:29.070050    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:38:29.070155    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:38:29.083144    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:38:29.083218    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:38:29.094979    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:38:29.095050    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:38:29.106291    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:38:29.106380    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:38:29.118006    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:38:29.118079    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:38:29.129628    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:38:29.129702    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:38:29.145103    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:38:29.145179    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:38:29.156998    4412 logs.go:276] 0 containers: []
	W0805 16:38:29.157009    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:38:29.157076    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:38:29.168830    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:38:29.168849    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:38:29.168855    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:38:29.184642    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:38:29.184655    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:38:29.197823    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:38:29.197839    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:38:29.237015    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:38:29.237035    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:38:29.253079    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:38:29.253093    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:38:29.265632    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:38:29.265647    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:38:29.279451    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:38:29.279463    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:38:29.292069    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:38:29.292081    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:38:29.319105    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:38:29.319124    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:38:29.343909    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:38:29.343922    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:38:29.359751    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:38:29.359764    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:38:29.378148    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:38:29.378160    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:38:29.393386    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:38:29.393403    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:38:29.408601    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:38:29.408617    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:38:29.421894    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:38:29.421908    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:38:29.426964    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:38:29.426973    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:38:29.465255    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:38:29.465268    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:38:31.982478    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:38:36.984667    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:38:36.985137    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:38:37.023218    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:38:37.023344    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:38:37.044683    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:38:37.044783    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:38:37.063180    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:38:37.063256    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:38:37.075672    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:38:37.075745    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:38:37.086645    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:38:37.086715    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:38:37.098364    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:38:37.098434    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:38:37.108947    4412 logs.go:276] 0 containers: []
	W0805 16:38:37.108959    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:38:37.109022    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:38:37.120320    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:38:37.120341    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:38:37.120347    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:38:37.135345    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:38:37.135358    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:38:37.150287    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:38:37.150298    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:38:37.166188    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:38:37.166207    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:38:37.181056    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:38:37.181074    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:38:37.194348    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:38:37.194363    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:38:37.213250    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:38:37.213262    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:38:37.225198    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:38:37.225209    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:38:37.238881    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:38:37.238892    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:38:37.275737    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:38:37.275752    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:38:37.287670    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:38:37.287682    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:38:37.300454    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:38:37.300467    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:38:37.305485    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:38:37.305497    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:38:37.342399    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:38:37.342411    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:38:37.354545    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:38:37.354558    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:38:37.369891    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:38:37.369903    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:38:37.382637    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:38:37.382648    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:38:39.908365    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:38:44.910788    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:38:44.910928    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:38:44.922560    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:38:44.922633    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:38:44.936893    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:38:44.936966    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:38:44.950311    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:38:44.950378    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:38:44.961157    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:38:44.961221    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:38:44.972376    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:38:44.972445    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:38:44.983315    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:38:44.983382    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:38:44.993682    4412 logs.go:276] 0 containers: []
	W0805 16:38:44.993692    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:38:44.993742    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:38:45.004445    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:38:45.004462    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:38:45.004467    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:38:45.039521    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:38:45.039532    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:38:45.052725    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:38:45.052738    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:38:45.064964    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:38:45.064976    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:38:45.079474    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:38:45.079484    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:38:45.094429    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:38:45.094438    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:38:45.112703    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:38:45.112714    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:38:45.123596    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:38:45.123608    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:38:45.137847    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:38:45.137859    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:38:45.153548    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:38:45.153560    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:38:45.165670    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:38:45.165682    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:38:45.183702    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:38:45.183716    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:38:45.195451    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:38:45.195461    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:38:45.230616    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:38:45.230624    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:38:45.235104    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:38:45.235112    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:38:45.247239    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:38:45.247250    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:38:45.261940    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:38:45.261950    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:38:47.786900    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:38:52.789158    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:38:52.789316    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:38:52.808391    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:38:52.808486    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:38:52.822686    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:38:52.822768    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:38:52.834385    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:38:52.834454    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:38:52.845620    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:38:52.845684    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:38:52.855710    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:38:52.855778    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:38:52.868357    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:38:52.868421    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:38:52.878909    4412 logs.go:276] 0 containers: []
	W0805 16:38:52.878925    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:38:52.878984    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:38:52.889296    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:38:52.889314    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:38:52.889320    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:38:52.894265    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:38:52.894275    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:38:52.908185    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:38:52.908199    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:38:52.922832    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:38:52.922846    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:38:52.934663    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:38:52.934677    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:38:52.970951    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:38:52.970966    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:38:52.987479    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:38:52.987491    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:38:53.010625    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:38:53.010633    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:38:53.049398    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:38:53.049410    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:38:53.063532    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:38:53.063543    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:38:53.078215    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:38:53.078224    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:38:53.090025    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:38:53.090034    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:38:53.104526    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:38:53.104535    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:38:53.115777    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:38:53.115789    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:38:53.132334    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:38:53.132343    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:38:53.147737    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:38:53.147751    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:38:53.165003    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:38:53.165012    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:38:55.679816    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:00.681923    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:00.682113    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:39:00.702264    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:39:00.702337    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:39:00.714980    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:39:00.715051    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:39:00.725945    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:39:00.726011    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:39:00.736611    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:39:00.736673    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:39:00.746909    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:39:00.746971    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:39:00.758324    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:39:00.758390    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:39:00.769464    4412 logs.go:276] 0 containers: []
	W0805 16:39:00.769475    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:39:00.769533    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:39:00.779878    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:39:00.779896    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:39:00.779904    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:39:00.794172    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:39:00.794183    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:39:00.807513    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:39:00.807527    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:39:00.822715    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:39:00.822725    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:39:00.841021    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:39:00.841031    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:39:00.853511    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:39:00.853522    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:39:00.891528    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:39:00.891539    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:39:00.905565    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:39:00.905575    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:39:00.916635    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:39:00.916646    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:39:00.939415    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:39:00.939425    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:39:00.961381    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:39:00.961392    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:39:00.972858    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:39:00.972868    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:39:00.984196    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:39:00.984206    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:39:00.998413    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:39:00.998423    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:39:01.010181    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:39:01.010189    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:39:01.015185    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:39:01.015191    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:39:01.051511    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:39:01.051523    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:39:03.567437    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:08.569935    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:08.569971    4412 kubeadm.go:597] duration metric: took 4m3.877478583s to restartPrimaryControlPlane
	W0805 16:39:08.570007    4412 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 16:39:08.570022    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
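At this point the 4-minute restart budget is spent without a single healthy /healthz response, so minikube abandons the in-place restart and wipes the existing control plane with kubeadm reset before re-initializing from scratch. Among other things, reset deletes the static-pod manifests and the kubeconfig files under /etc/kubernetes, which is why every ls and grep on those files below fails with "No such file or directory". An illustrative post-reset check (the echo marker is ours, not minikube's):

    $ sudo ls /etc/kubernetes/admin.conf 2>/dev/null || echo "control-plane config wiped"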
	I0805 16:39:09.545410    4412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:39:09.550667    4412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 16:39:09.554338    4412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 16:39:09.557241    4412 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:39:09.557247    4412 kubeadm.go:157] found existing configuration files:
	
	I0805 16:39:09.557272    4412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/admin.conf
	I0805 16:39:09.559716    4412 kubeadm.go:163] "https://control-plane.minikube.internal:50282" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:39:09.559742    4412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 16:39:09.562297    4412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/kubelet.conf
	I0805 16:39:09.565264    4412 kubeadm.go:163] "https://control-plane.minikube.internal:50282" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:39:09.565282    4412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 16:39:09.567976    4412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/controller-manager.conf
	I0805 16:39:09.570584    4412 kubeadm.go:163] "https://control-plane.minikube.internal:50282" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:39:09.570606    4412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 16:39:09.573755    4412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/scheduler.conf
	I0805 16:39:09.576528    4412 kubeadm.go:163] "https://control-plane.minikube.internal:50282" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:39:09.576549    4412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
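The four grep/rm pairs above are a staleness sweep: any kubeconfig that does not reference the expected endpoint https://control-plane.minikube.internal:50282 is removed so kubeadm init can regenerate it. Because reset already deleted the files, every grep exits with status 2 and every rm is a no-op. The same logic as a compact sketch (file names and endpoint taken from the log; the loop form is illustrative):

    $ for f in admin kubelet controller-manager scheduler; do
    >   sudo grep -q "https://control-plane.minikube.internal:50282" "/etc/kubernetes/$f.conf" \
    >     || sudo rm -f "/etc/kubernetes/$f.conf"
    > done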
	I0805 16:39:09.579145    4412 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 16:39:09.598278    4412 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0805 16:39:09.598324    4412 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 16:39:09.652665    4412 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 16:39:09.652771    4412 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 16:39:09.652829    4412 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 16:39:09.708170    4412 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 16:39:09.712401    4412 out.go:204]   - Generating certificates and keys ...
	I0805 16:39:09.712441    4412 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 16:39:09.712481    4412 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 16:39:09.712531    4412 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 16:39:09.712566    4412 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 16:39:09.712606    4412 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 16:39:09.712636    4412 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 16:39:09.712670    4412 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 16:39:09.712702    4412 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 16:39:09.712748    4412 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 16:39:09.712792    4412 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 16:39:09.712819    4412 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 16:39:09.712849    4412 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 16:39:09.846137    4412 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 16:39:09.919921    4412 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 16:39:09.981526    4412 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 16:39:10.181933    4412 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 16:39:10.216100    4412 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 16:39:10.216470    4412 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 16:39:10.216511    4412 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 16:39:10.300750    4412 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 16:39:10.304633    4412 out.go:204]   - Booting up control plane ...
	I0805 16:39:10.304682    4412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 16:39:10.304786    4412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 16:39:10.305794    4412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 16:39:10.306543    4412 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 16:39:10.307367    4412 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 16:39:14.309419    4412 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.002014 seconds
	I0805 16:39:14.309534    4412 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 16:39:14.312965    4412 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 16:39:14.820996    4412 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 16:39:14.821133    4412 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-230000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 16:39:15.325042    4412 kubeadm.go:310] [bootstrap-token] Using token: bac5b6.noii76sbj0s4yru1
	I0805 16:39:15.331298    4412 out.go:204]   - Configuring RBAC rules ...
	I0805 16:39:15.331366    4412 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 16:39:15.331407    4412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 16:39:15.335878    4412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 16:39:15.336896    4412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 16:39:15.340145    4412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 16:39:15.341628    4412 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 16:39:15.345926    4412 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 16:39:15.515808    4412 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 16:39:15.729768    4412 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 16:39:15.730271    4412 kubeadm.go:310] 
	I0805 16:39:15.730303    4412 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 16:39:15.730307    4412 kubeadm.go:310] 
	I0805 16:39:15.730346    4412 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 16:39:15.730351    4412 kubeadm.go:310] 
	I0805 16:39:15.730363    4412 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 16:39:15.730392    4412 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 16:39:15.730415    4412 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 16:39:15.730418    4412 kubeadm.go:310] 
	I0805 16:39:15.730456    4412 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 16:39:15.730463    4412 kubeadm.go:310] 
	I0805 16:39:15.730494    4412 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 16:39:15.730498    4412 kubeadm.go:310] 
	I0805 16:39:15.730526    4412 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 16:39:15.730573    4412 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 16:39:15.730619    4412 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 16:39:15.730622    4412 kubeadm.go:310] 
	I0805 16:39:15.730666    4412 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 16:39:15.730716    4412 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 16:39:15.730720    4412 kubeadm.go:310] 
	I0805 16:39:15.730757    4412 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bac5b6.noii76sbj0s4yru1 \
	I0805 16:39:15.730804    4412 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7280cf86517627a1b2e8b1aa5e2d30adc1efda7485123a11788055778cfe70b7 \
	I0805 16:39:15.730816    4412 kubeadm.go:310] 	--control-plane 
	I0805 16:39:15.730821    4412 kubeadm.go:310] 
	I0805 16:39:15.730855    4412 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 16:39:15.730858    4412 kubeadm.go:310] 
	I0805 16:39:15.730892    4412 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bac5b6.noii76sbj0s4yru1 \
	I0805 16:39:15.730941    4412 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7280cf86517627a1b2e8b1aa5e2d30adc1efda7485123a11788055778cfe70b7 
	I0805 16:39:15.731002    4412 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
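The --discovery-token-ca-cert-hash in both join commands pins the cluster CA: a joining node hashes the CA public key it receives during discovery and refuses to proceed unless it matches this value. The hash can be recomputed with the standard OpenSSL pipeline from the kubeadm docs (shown against kubeadm's default pki path; this run keeps its certificates in /var/lib/minikube/certs, per the [certs] lines above):

    $ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex | sed 's/^.* //'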
	I0805 16:39:15.731010    4412 cni.go:84] Creating CNI manager for ""
	I0805 16:39:15.731018    4412 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:39:15.735635    4412 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 16:39:15.743516    4412 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 16:39:15.746904    4412 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
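For Kubernetes v1.24+ on the docker runtime, minikube recommends its built-in bridge CNI and scp's a 496-byte conflist into /etc/cni/net.d. The log does not reproduce the file's contents; a minimal bridge conflist of the usual shape looks like this (every value below is illustrative, not the exact bytes minikube wrote):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }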
	I0805 16:39:15.751813    4412 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 16:39:15.751860    4412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:39:15.751860    4412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-230000 minikube.k8s.io/updated_at=2024_08_05T16_39_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4 minikube.k8s.io/name=running-upgrade-230000 minikube.k8s.io/primary=true
	I0805 16:39:15.754851    4412 ops.go:34] apiserver oom_adj: -16
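Reading /proc/<pid>/oom_adj for the freshly started apiserver verifies the OOM-killer protection the kubelet applies to critical static pods: a strongly negative oom_score_adj, reported by the legacy oom_adj file as -16 here, biases the kernel away from killing the apiserver under memory pressure. The same check by hand, mirroring the command from the log:

    $ cat /proc/$(pgrep kube-apiserver)/oom_adj
    -16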
	I0805 16:39:15.807308    4412 kubeadm.go:1113] duration metric: took 55.487709ms to wait for elevateKubeSystemPrivileges
	I0805 16:39:15.807324    4412 kubeadm.go:394] duration metric: took 4m11.128774792s to StartCluster
	I0805 16:39:15.807334    4412 settings.go:142] acquiring lock: {Name:mk8f45924d83b23294fe6a7ba250768dbca87de2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:39:15.807418    4412 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:39:15.807798    4412 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/kubeconfig: {Name:mk0db307fdf97cd8e18f7fd35d350a5523a32e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:39:15.807992    4412 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:39:15.808001    4412 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 16:39:15.808034    4412 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-230000"
	I0805 16:39:15.808036    4412 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-230000"
	I0805 16:39:15.808051    4412 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-230000"
	I0805 16:39:15.808065    4412 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-230000"
	W0805 16:39:15.808070    4412 addons.go:243] addon storage-provisioner should already be in state true
	I0805 16:39:15.808081    4412 host.go:66] Checking if "running-upgrade-230000" exists ...
	I0805 16:39:15.808086    4412 config.go:182] Loaded profile config "running-upgrade-230000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 16:39:15.808943    4412 kapi.go:59] client config for running-upgrade-230000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1054/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1017e3e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:39:15.809067    4412 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-230000"
	W0805 16:39:15.809072    4412 addons.go:243] addon default-storageclass should already be in state true
	I0805 16:39:15.809080    4412 host.go:66] Checking if "running-upgrade-230000" exists ...
	I0805 16:39:15.812599    4412 out.go:177] * Verifying Kubernetes components...
	I0805 16:39:15.812944    4412 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 16:39:15.818779    4412 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 16:39:15.818788    4412 sshutil.go:53] new ssh client: &{IP:localhost Port:50250 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/running-upgrade-230000/id_rsa Username:docker}
	I0805 16:39:15.822579    4412 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:39:15.826520    4412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:39:15.830545    4412 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:39:15.830553    4412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 16:39:15.830559    4412 sshutil.go:53] new ssh client: &{IP:localhost Port:50250 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/running-upgrade-230000/id_rsa Username:docker}
	I0805 16:39:15.920555    4412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:39:15.926819    4412 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:39:15.926876    4412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:39:15.930941    4412 api_server.go:72] duration metric: took 122.93975ms to wait for apiserver process to appear ...
	I0805 16:39:15.930949    4412 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:39:15.930955    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:15.938003    4412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:39:15.996797    4412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 16:39:20.932975    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:20.933003    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:25.933190    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:25.933212    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:30.933439    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:30.933482    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:35.934083    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:35.934101    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:40.934586    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:40.934645    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:45.935503    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:45.935550    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0805 16:39:46.261803    4412 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0805 16:39:46.265157    4412 out.go:177] * Enabled addons: storage-provisioner
	I0805 16:39:46.272985    4412 addons.go:510] duration metric: took 30.465595458s for enable addons: enabled=[storage-provisioner]
	I0805 16:39:50.936606    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:50.936676    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:55.937981    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:55.938054    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:00.939924    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:00.939968    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:05.941365    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:05.941412    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:10.943083    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:10.943137    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:15.945302    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
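	The probe loop that just gave up follows a simple pattern: GET https://10.0.2.15:8443/healthz with a short client timeout, retry until a deadline, then fall back to collecting diagnostics (which begins on the next line). A minimal sketch of that loop, under two stated assumptions: the 5-second timeout is inferred from the ~5 s spacing of the "Checking"/"stopped" entries, and InsecureSkipVerify stands in for minikube's real CA handling, which the log does not show:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz issues one GET against the apiserver /healthz endpoint,
// the same check api_server.go logs as "Checking apiserver healthz".
func probeHealthz(client *http.Client, url string) error {
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	client := &http.Client{
		// Assumption: 5 s per attempt, matching the log's cadence.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch only: skip TLS verification.
			// minikube itself trusts the cluster CA from the kubeconfig.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://10.0.2.15:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if err := probeHealthz(client, url); err != nil {
			fmt.Println("stopped:", err) // same shape as the log lines
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
	fmt.Println("gave up waiting; falling back to log collection")
}
```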
	I0805 16:40:15.945395    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:15.956508    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:40:15.956574    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:15.966456    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:40:15.966524    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:15.977446    4412 logs.go:276] 2 containers: [bbc24100193e b6a5ca2c0447]
	I0805 16:40:15.977513    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:15.989205    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:40:15.989274    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:15.999824    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:40:15.999903    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:16.010314    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:40:16.010382    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:16.020307    4412 logs.go:276] 0 containers: []
	W0805 16:40:16.020321    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:16.020383    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:16.031083    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:40:16.031098    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:40:16.031103    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:40:16.043841    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:40:16.043852    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:40:16.055855    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:40:16.055867    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:40:16.071236    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:40:16.071251    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:40:16.082966    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:40:16.082977    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:40:16.106318    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:40:16.106328    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:40:16.120847    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:16.120857    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:16.125595    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:16.125603    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:16.160753    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:40:16.160764    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:40:16.176314    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:40:16.176325    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:40:16.187838    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:16.187855    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:16.213162    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:40:16.213174    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:16.224645    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:16.224662    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
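	Each diagnostics pass in this log (this one and every later repetition) is the same sequence: enumerate containers per control-plane component with a docker name filter, then tail the last 400 lines of each. A sketch of that enumeration using the exact docker commands from the log; the helper names containerIDs and tailLogs are hypothetical:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of containers whose name matches k8s_<component>,
// exactly the `docker ps -a --filter=name=... --format={{.ID}}` calls above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs fetches the last 400 log lines of one container, matching the
// `docker logs --tail 400 <id>` calls above.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	// The component list mirrors the order the log queries them in.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			logs, _ := tailLogs(id)
			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
		}
	}
}
```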
	I0805 16:40:18.766069    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:23.768191    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:23.768337    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:23.780596    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:40:23.780666    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:23.790849    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:40:23.790921    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:23.801151    4412 logs.go:276] 2 containers: [bbc24100193e b6a5ca2c0447]
	I0805 16:40:23.801220    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:23.811647    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:40:23.811717    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:23.821586    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:40:23.821650    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:23.831954    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:40:23.832020    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:23.842046    4412 logs.go:276] 0 containers: []
	W0805 16:40:23.842057    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:23.842114    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:23.856546    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:40:23.856560    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:40:23.856565    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:40:23.868266    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:40:23.868276    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:40:23.879823    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:23.879836    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:23.904479    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:40:23.904491    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:23.916209    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:23.916221    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:23.954627    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:40:23.954640    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:40:23.969345    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:40:23.969358    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:40:23.983801    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:40:23.983814    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:40:23.997903    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:40:23.997914    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:40:24.012459    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:40:24.012469    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:40:24.030296    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:24.030307    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:24.069698    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:24.069709    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:24.074838    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:40:24.074845    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:40:26.588226    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:31.590712    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:31.590803    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:31.601463    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:40:31.601536    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:31.619409    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:40:31.619473    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:31.630351    4412 logs.go:276] 2 containers: [bbc24100193e b6a5ca2c0447]
	I0805 16:40:31.630428    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:31.644072    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:40:31.644133    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:31.658386    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:40:31.658464    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:31.670075    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:40:31.670147    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:31.680685    4412 logs.go:276] 0 containers: []
	W0805 16:40:31.680696    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:31.680750    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:31.691163    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:40:31.691181    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:31.691187    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:31.729932    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:31.729941    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:31.734508    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:31.734514    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:31.772075    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:40:31.772086    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:40:31.786884    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:40:31.786895    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:40:31.802131    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:40:31.802142    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:40:31.814153    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:40:31.814164    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:40:31.829820    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:40:31.829830    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:40:31.843555    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:40:31.843569    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:40:31.855324    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:40:31.855334    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:40:31.866935    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:40:31.866948    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:40:31.884289    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:31.884302    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:31.907881    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:40:31.907889    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:34.421786    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:39.424017    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:39.424125    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:39.435681    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:40:39.435754    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:39.447359    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:40:39.447429    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:39.458908    4412 logs.go:276] 2 containers: [bbc24100193e b6a5ca2c0447]
	I0805 16:40:39.458983    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:39.470544    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:40:39.470664    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:39.481224    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:40:39.481295    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:39.491913    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:40:39.491978    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:39.501714    4412 logs.go:276] 0 containers: []
	W0805 16:40:39.501725    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:39.501787    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:39.512327    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:40:39.512340    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:40:39.512345    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:40:39.523996    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:40:39.524009    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:39.535439    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:39.535449    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:39.539784    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:39.539790    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:39.576880    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:40:39.576890    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:40:39.590653    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:40:39.590668    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:40:39.602405    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:40:39.602419    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:40:39.617135    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:40:39.617148    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:40:39.635977    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:40:39.635987    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:40:39.647496    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:39.647506    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:39.672061    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:39.672068    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:39.711289    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:40:39.711307    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:40:39.725467    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:40:39.725511    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:40:42.239521    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:47.241774    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:47.241864    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:47.254189    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:40:47.254259    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:47.265287    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:40:47.265360    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:47.276872    4412 logs.go:276] 2 containers: [bbc24100193e b6a5ca2c0447]
	I0805 16:40:47.276945    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:47.292537    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:40:47.292603    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:47.304072    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:40:47.304147    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:47.319674    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:40:47.319747    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:47.330257    4412 logs.go:276] 0 containers: []
	W0805 16:40:47.330264    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:47.330323    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:47.341286    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:40:47.341301    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:40:47.341307    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:40:47.357630    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:40:47.357642    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:40:47.373693    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:40:47.373701    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:40:47.386545    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:40:47.386556    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:40:47.405784    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:40:47.405793    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:40:47.421380    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:47.421390    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:47.425995    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:47.426002    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:47.461640    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:40:47.461651    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:40:47.474308    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:40:47.474321    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:40:47.489532    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:40:47.489543    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:40:47.501412    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:47.501423    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:47.526762    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:40:47.526769    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:47.538593    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:47.538607    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:50.079807    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:55.081887    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:55.081998    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:55.093280    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:40:55.093345    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:55.104372    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:40:55.104438    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:55.115632    4412 logs.go:276] 2 containers: [bbc24100193e b6a5ca2c0447]
	I0805 16:40:55.115703    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:55.127189    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:40:55.127257    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:55.139584    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:40:55.139655    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:55.151789    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:40:55.151855    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:55.163220    4412 logs.go:276] 0 containers: []
	W0805 16:40:55.163229    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:55.163283    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:55.174453    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:40:55.174466    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:55.174472    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:55.179043    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:55.179053    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:55.215493    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:40:55.215508    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:40:55.231489    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:40:55.231501    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:40:55.244705    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:40:55.244716    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:40:55.257426    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:55.257438    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:55.283469    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:55.283482    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:55.325763    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:40:55.325779    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:40:55.341916    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:40:55.341930    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:40:55.357643    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:40:55.357654    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:40:55.369307    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:40:55.369318    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:40:55.380468    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:40:55.380479    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:40:55.397825    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:40:55.397835    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:57.911043    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:02.912979    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:02.913065    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:02.924359    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:41:02.924434    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:02.936047    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:41:02.936127    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:02.951750    4412 logs.go:276] 2 containers: [bbc24100193e b6a5ca2c0447]
	I0805 16:41:02.951819    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:02.963566    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:41:02.963635    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:02.974892    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:41:02.974984    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:02.989039    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:41:02.989115    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:03.000297    4412 logs.go:276] 0 containers: []
	W0805 16:41:03.000307    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:03.000366    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:03.011815    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:41:03.011836    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:41:03.011843    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:41:03.024923    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:03.024934    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:03.050365    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:03.050375    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:03.054740    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:41:03.054746    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:41:03.074786    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:41:03.074801    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:41:03.088631    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:41:03.088645    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:41:03.102411    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:41:03.102422    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:41:03.115297    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:41:03.115308    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:03.128133    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:03.128145    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:03.167595    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:03.167622    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:03.204888    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:41:03.204900    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:41:03.220297    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:41:03.220310    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:41:03.236106    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:41:03.236114    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:41:05.757516    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:10.759815    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:10.760204    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:10.789031    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:41:10.789157    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:10.807271    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:41:10.807364    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:10.821980    4412 logs.go:276] 2 containers: [bbc24100193e b6a5ca2c0447]
	I0805 16:41:10.822062    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:10.834620    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:41:10.834696    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:10.846289    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:41:10.846365    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:10.858658    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:41:10.858732    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:10.871983    4412 logs.go:276] 0 containers: []
	W0805 16:41:10.871990    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:10.872019    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:10.883838    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:41:10.883850    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:41:10.883854    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:41:10.898910    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:41:10.898924    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:41:10.911605    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:41:10.911615    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:41:10.927776    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:41:10.927789    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:41:10.946553    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:41:10.946563    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:41:10.959445    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:10.959456    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:10.985575    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:10.985591    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:10.990659    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:10.990669    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:11.028359    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:41:11.028372    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:41:11.043593    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:41:11.043606    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:41:11.056291    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:41:11.056304    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:41:11.076706    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:41:11.076718    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:11.089736    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:11.089744    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:13.633697    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:18.635958    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:18.636183    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:18.651035    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:41:18.651109    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:18.663230    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:41:18.663297    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:18.673632    4412 logs.go:276] 2 containers: [bbc24100193e b6a5ca2c0447]
	I0805 16:41:18.673697    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:18.684213    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:41:18.684273    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:18.694792    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:41:18.694854    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:18.704904    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:41:18.704969    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:18.715529    4412 logs.go:276] 0 containers: []
	W0805 16:41:18.715539    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:18.715592    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:18.727938    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:41:18.727954    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:18.727960    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:18.733032    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:41:18.733039    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:41:18.748572    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:41:18.748583    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:41:18.761258    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:41:18.761268    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:41:18.777534    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:18.777547    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:18.803375    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:41:18.803389    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:41:18.815963    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:41:18.815975    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:18.829406    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:18.829417    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:18.870267    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:18.870277    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:18.908031    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:41:18.908045    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:41:18.923994    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:41:18.924005    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:41:18.937685    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:41:18.937695    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:41:18.950569    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:41:18.950584    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:41:21.471652    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:26.473784    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:26.474198    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:26.514149    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:41:26.514284    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:26.536434    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:41:26.536525    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:26.551838    4412 logs.go:276] 2 containers: [bbc24100193e b6a5ca2c0447]
	I0805 16:41:26.551917    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:26.564546    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:41:26.564625    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:26.575778    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:41:26.575852    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:26.586776    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:41:26.586844    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:26.597378    4412 logs.go:276] 0 containers: []
	W0805 16:41:26.597389    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:26.597443    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:26.608532    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:41:26.608549    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:26.608558    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:26.654624    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:41:26.654635    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:41:26.670954    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:26.670969    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:26.697301    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:26.697313    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:26.738201    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:26.738210    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:26.743312    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:41:26.743328    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:41:26.756812    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:41:26.756824    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:41:26.770212    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:41:26.770226    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:41:26.783582    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:41:26.783592    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:41:26.802611    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:41:26.802624    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:41:26.815319    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:41:26.815332    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:26.827921    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:41:26.827933    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:41:26.843412    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:41:26.843423    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:41:29.360020    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:34.362146    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:34.362280    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:34.381122    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:41:34.381196    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:34.392554    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:41:34.392620    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:34.407301    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:41:34.407374    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:34.417508    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:41:34.417575    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:34.427539    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:41:34.427606    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:34.437720    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:41:34.437784    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:34.447770    4412 logs.go:276] 0 containers: []
	W0805 16:41:34.447781    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:34.447836    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:34.458190    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:41:34.458215    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:41:34.458221    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:41:34.469226    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:34.469238    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:34.503834    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:41:34.503844    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:41:34.515387    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:41:34.515397    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:41:34.539187    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:34.539198    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:34.578538    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:34.578558    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:34.583342    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:41:34.583353    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:41:34.598393    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:41:34.598404    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:41:34.610853    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:41:34.610870    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:41:34.628102    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:41:34.628114    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:41:34.644063    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:41:34.644079    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:41:34.656954    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:41:34.656966    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:34.670370    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:41:34.670381    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:41:34.686263    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:41:34.686276    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:41:34.698049    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:34.698060    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:37.226524    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:42.228697    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:42.228841    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:42.240921    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:41:42.240994    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:42.257520    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:41:42.257589    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:42.268364    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:41:42.268433    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:42.278905    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:41:42.278975    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:42.289116    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:41:42.289181    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:42.299785    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:41:42.299853    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:42.310185    4412 logs.go:276] 0 containers: []
	W0805 16:41:42.310196    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:42.310249    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:42.320704    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:41:42.320725    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:42.320730    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:42.325621    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:42.325631    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:42.361282    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:41:42.361292    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:41:42.372781    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:41:42.372791    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:41:42.396106    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:41:42.396118    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:41:42.430411    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:41:42.430422    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:42.445499    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:42.445510    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:42.484837    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:41:42.484857    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:41:42.497934    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:41:42.497944    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:41:42.514398    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:42.514414    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:42.540269    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:41:42.540283    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:41:42.557283    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:41:42.557295    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:41:42.572076    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:41:42.572088    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:41:42.592337    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:41:42.592352    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:41:42.604364    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:41:42.604374    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:41:45.122739    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:50.125005    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:50.125147    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:50.141318    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:41:50.141402    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:50.154303    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:41:50.154378    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:50.166438    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:41:50.166509    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:50.177182    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:41:50.177245    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:50.187447    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:41:50.187518    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:50.198031    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:41:50.198095    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:50.210970    4412 logs.go:276] 0 containers: []
	W0805 16:41:50.210980    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:50.211034    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:50.221516    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:41:50.221533    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:41:50.221538    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:41:50.233225    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:41:50.233239    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:50.244523    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:50.244536    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:50.281467    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:41:50.281489    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:41:50.296432    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:41:50.296447    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:41:50.308610    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:41:50.308619    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:41:50.323751    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:41:50.323762    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:41:50.335968    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:41:50.335978    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:41:50.347619    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:41:50.347630    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:41:50.361961    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:41:50.361971    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:41:50.382558    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:41:50.382568    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:41:50.400753    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:50.400766    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:50.426277    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:50.426287    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:50.431695    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:50.431706    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:50.471620    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:41:50.471628    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:41:52.987275    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:57.989610    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:57.989859    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:58.015896    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:41:58.016003    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:58.032462    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:41:58.032544    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:58.045769    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:41:58.045843    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:58.056417    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:41:58.056493    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:58.067379    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:41:58.067453    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:58.078055    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:41:58.078125    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:58.088091    4412 logs.go:276] 0 containers: []
	W0805 16:41:58.088101    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:58.088156    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:58.098767    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:41:58.098784    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:41:58.098789    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:41:58.115885    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:41:58.115901    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:41:58.130731    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:41:58.130745    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:41:58.142866    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:41:58.142876    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:41:58.154558    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:41:58.154570    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:41:58.167157    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:41:58.167169    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:41:58.185681    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:58.185695    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:58.211056    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:41:58.211073    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:41:58.229558    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:41:58.229572    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:41:58.242715    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:41:58.242729    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:41:58.253864    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:41:58.253875    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:58.265821    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:58.265837    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:58.304717    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:58.304743    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:58.309697    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:58.309707    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:58.351691    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:41:58.351704    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:42:00.867350    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:05.869613    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:05.869819    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:42:05.886968    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:42:05.887047    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:42:05.899556    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:42:05.899632    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:42:05.914351    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:42:05.914428    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:42:05.924600    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:42:05.924672    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:42:05.935420    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:42:05.935483    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:42:05.945868    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:42:05.945935    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:42:05.956203    4412 logs.go:276] 0 containers: []
	W0805 16:42:05.956219    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:42:05.956280    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:42:05.966856    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:42:05.966871    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:42:05.966876    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:42:05.988792    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:42:05.988801    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:42:06.000533    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:42:06.000544    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:42:06.012069    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:42:06.012082    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:42:06.023633    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:42:06.023644    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:42:06.059487    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:42:06.059499    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:42:06.074042    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:42:06.074051    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:42:06.085204    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:42:06.085215    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:42:06.124142    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:42:06.124150    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:42:06.135864    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:42:06.135877    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:42:06.147858    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:42:06.147871    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:42:06.175670    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:42:06.175685    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:42:06.201507    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:42:06.201516    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:42:06.206829    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:42:06.206840    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:42:06.222033    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:42:06.222041    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:42:08.745154    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:13.747420    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:13.747635    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:42:13.765458    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:42:13.765546    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:42:13.778502    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:42:13.778576    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:42:13.790179    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:42:13.790243    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:42:13.800787    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:42:13.800854    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:42:13.811084    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:42:13.811137    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:42:13.822091    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:42:13.822160    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:42:13.832738    4412 logs.go:276] 0 containers: []
	W0805 16:42:13.832749    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:42:13.832801    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:42:13.843037    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:42:13.843056    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:42:13.843062    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:42:13.880523    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:42:13.880533    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:42:13.894917    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:42:13.894927    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:42:13.906805    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:42:13.906816    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:42:13.921370    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:42:13.921381    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:42:13.946191    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:42:13.946200    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:42:13.957872    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:42:13.957882    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:42:13.969410    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:42:13.969420    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:42:13.986576    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:42:13.986587    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:42:13.991156    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:42:13.991165    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:42:14.003224    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:42:14.003235    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:42:14.017959    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:42:14.017968    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:42:14.031736    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:42:14.031748    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:42:14.067561    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:42:14.067572    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:42:14.087574    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:42:14.087588    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:42:16.602471    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:21.604765    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:21.604987    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:42:21.617791    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:42:21.617876    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:42:21.629145    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:42:21.629209    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:42:21.639805    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:42:21.639880    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:42:21.650480    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:42:21.650553    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:42:21.661360    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:42:21.661430    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:42:21.676971    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:42:21.677043    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:42:21.693621    4412 logs.go:276] 0 containers: []
	W0805 16:42:21.693631    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:42:21.693691    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:42:21.704193    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:42:21.704210    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:42:21.704215    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:42:21.738778    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:42:21.738788    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:42:21.750809    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:42:21.750819    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:42:21.790204    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:42:21.790213    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:42:21.805871    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:42:21.805883    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:42:21.817791    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:42:21.817801    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:42:21.829594    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:42:21.829603    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:42:21.852865    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:42:21.852873    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:42:21.864112    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:42:21.864122    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:42:21.868673    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:42:21.868680    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:42:21.882120    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:42:21.882130    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:42:21.893375    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:42:21.893383    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:42:21.905037    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:42:21.905048    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:42:21.922653    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:42:21.922666    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:42:21.934374    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:42:21.934383    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:42:24.452859    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:29.455026    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:29.455221    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:42:29.468269    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:42:29.468346    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:42:29.479526    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:42:29.479596    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:42:29.490466    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:42:29.490540    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:42:29.508239    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:42:29.508304    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:42:29.519118    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:42:29.519186    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:42:29.530659    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:42:29.530731    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:42:29.544647    4412 logs.go:276] 0 containers: []
	W0805 16:42:29.544658    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:42:29.544717    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:42:29.555529    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:42:29.555546    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:42:29.555551    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:42:29.580253    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:42:29.580262    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:42:29.617909    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:42:29.617922    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:42:29.633845    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:42:29.633862    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:42:29.645373    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:42:29.645386    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:42:29.659963    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:42:29.659972    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:42:29.677779    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:42:29.677791    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:42:29.689643    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:42:29.689652    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:42:29.701494    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:42:29.701505    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:42:29.706558    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:42:29.706564    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:42:29.721262    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:42:29.721275    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:42:29.738746    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:42:29.738759    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:42:29.750721    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:42:29.750731    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:42:29.762422    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:42:29.762432    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:42:29.800816    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:42:29.800826    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:42:32.315507    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:37.317839    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:37.318214    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:42:37.351987    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:42:37.352111    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:42:37.373434    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:42:37.373529    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:42:37.387236    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:42:37.387315    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:42:37.401744    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:42:37.401815    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:42:37.412939    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:42:37.413009    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:42:37.427462    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:42:37.427529    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:42:37.437364    4412 logs.go:276] 0 containers: []
	W0805 16:42:37.437378    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:42:37.437439    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:42:37.447791    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:42:37.447808    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:42:37.447814    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:42:37.452227    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:42:37.452235    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:42:37.467085    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:42:37.467098    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:42:37.482023    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:42:37.482037    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:42:37.496131    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:42:37.496142    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:42:37.509832    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:42:37.509846    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:42:37.521647    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:42:37.521657    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:42:37.562391    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:42:37.562402    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:42:37.598038    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:42:37.598049    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:42:37.610611    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:42:37.610622    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:42:37.625687    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:42:37.625698    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:42:37.643241    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:42:37.643251    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:42:37.654597    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:42:37.654606    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:42:37.679880    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:42:37.679891    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:42:37.691361    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:42:37.691370    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:42:40.205139    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:45.207263    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:45.207372    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:42:45.218311    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:42:45.218383    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:42:45.229278    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:42:45.229340    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:42:45.240908    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:42:45.240978    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:42:45.253898    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:42:45.253969    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:42:45.266469    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:42:45.266544    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:42:45.278473    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:42:45.278538    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:42:45.289872    4412 logs.go:276] 0 containers: []
	W0805 16:42:45.289883    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:42:45.289937    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:42:45.300983    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:42:45.300999    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:42:45.301004    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:42:45.313898    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:42:45.313909    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:42:45.329614    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:42:45.329626    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:42:45.344937    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:42:45.344950    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:42:45.360922    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:42:45.360933    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:42:45.378532    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:42:45.378551    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:42:45.397523    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:42:45.397538    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:42:45.411250    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:42:45.411263    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:42:45.452196    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:42:45.452211    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:42:45.492511    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:42:45.492522    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:42:45.505625    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:42:45.505638    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:42:45.518738    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:42:45.518750    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:42:45.531841    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:42:45.531849    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:42:45.556703    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:42:45.556716    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:42:45.562370    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:42:45.562379    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:42:48.088924    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:53.091128    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:53.091242    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:42:53.103327    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:42:53.103390    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:42:53.113843    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:42:53.113912    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:42:53.127628    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:42:53.127704    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:42:53.137874    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:42:53.137938    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:42:53.148133    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:42:53.148206    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:42:53.160627    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:42:53.160691    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:42:53.171383    4412 logs.go:276] 0 containers: []
	W0805 16:42:53.171393    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:42:53.171451    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:42:53.181763    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:42:53.181784    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:42:53.181789    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:42:53.206408    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:42:53.206416    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:42:53.210719    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:42:53.210725    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:42:53.224989    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:42:53.224999    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:42:53.236560    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:42:53.236570    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:42:53.251812    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:42:53.251822    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:42:53.278495    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:42:53.278507    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:42:53.290752    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:42:53.290763    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:42:53.304851    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:42:53.304863    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:42:53.317197    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:42:53.317207    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:42:53.355584    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:42:53.355594    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:42:53.366904    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:42:53.366914    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:42:53.378942    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:42:53.378951    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:42:53.414131    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:42:53.414142    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:42:53.426092    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:42:53.426103    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:42:55.939943    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:43:00.942006    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:43:00.942101    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:43:00.955013    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:43:00.955090    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:43:00.965995    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:43:00.966062    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:43:00.976459    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:43:00.976525    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:43:00.990423    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:43:00.990496    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:43:01.001059    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:43:01.001123    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:43:01.012185    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:43:01.012240    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:43:01.022341    4412 logs.go:276] 0 containers: []
	W0805 16:43:01.022356    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:43:01.022414    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:43:01.032907    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:43:01.032923    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:43:01.032927    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:43:01.037808    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:43:01.037816    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:43:01.055745    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:43:01.055759    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:43:01.080283    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:43:01.080293    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:43:01.119701    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:43:01.119711    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:43:01.131867    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:43:01.131878    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:43:01.147024    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:43:01.147035    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:43:01.164076    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:43:01.164086    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:43:01.177544    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:43:01.177553    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:43:01.189011    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:43:01.189022    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:43:01.200604    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:43:01.200617    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:43:01.211919    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:43:01.211930    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:43:01.223245    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:43:01.223255    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:43:01.260806    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:43:01.260818    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:43:01.273967    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:43:01.273979    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:43:03.788390    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:43:08.790521    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:43:08.790730    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:43:08.804071    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:43:08.804151    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:43:08.814950    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:43:08.815021    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:43:08.825814    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:43:08.825885    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:43:08.836377    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:43:08.836443    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:43:08.847318    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:43:08.847389    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:43:08.858085    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:43:08.858153    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:43:08.868652    4412 logs.go:276] 0 containers: []
	W0805 16:43:08.868663    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:43:08.868721    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:43:08.879803    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:43:08.879819    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:43:08.879824    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:43:08.948922    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:43:08.948933    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:43:08.961368    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:43:08.961378    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:43:09.001651    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:43:09.001660    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:43:09.016551    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:43:09.016575    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:43:09.028548    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:43:09.028559    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:43:09.040628    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:43:09.040642    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:43:09.045459    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:43:09.045468    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:43:09.059513    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:43:09.059522    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:43:09.071023    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:43:09.071032    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:43:09.086647    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:43:09.086660    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:43:09.110636    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:43:09.110647    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:43:09.122478    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:43:09.122492    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:43:09.134345    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:43:09.134356    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:43:09.152084    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:43:09.152094    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:43:11.665474    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:43:16.666991    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:43:16.671578    4412 out.go:177] 
	W0805 16:43:16.675549    4412 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0805 16:43:16.675559    4412 out.go:239] * 
	W0805 16:43:16.676358    4412 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:43:16.687419    4412 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-230000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-05 16:43:16.790445 -0700 PDT m=+3387.234754418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-230000 -n running-upgrade-230000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-230000 -n running-upgrade-230000: exit status 2 (15.628860042s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-230000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-939000          | force-systemd-flag-939000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-374000              | force-systemd-env-374000  | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-374000           | force-systemd-env-374000  | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	| start   | -p docker-flags-290000                | docker-flags-290000       | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-939000             | force-systemd-flag-939000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-939000          | force-systemd-flag-939000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	| start   | -p cert-expiration-035000             | cert-expiration-035000    | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-290000 ssh               | docker-flags-290000       | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-290000 ssh               | docker-flags-290000       | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-290000                | docker-flags-290000       | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	| start   | -p cert-options-906000                | cert-options-906000       | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-906000 ssh               | cert-options-906000       | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-906000 -- sudo        | cert-options-906000       | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-906000                | cert-options-906000       | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	| start   | -p running-upgrade-230000             | minikube                  | jenkins | v1.26.0 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:34 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-230000             | running-upgrade-230000    | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-035000             | cert-expiration-035000    | jenkins | v1.33.1 | 05 Aug 24 16:36 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-035000             | cert-expiration-035000    | jenkins | v1.33.1 | 05 Aug 24 16:36 PDT | 05 Aug 24 16:36 PDT |
	| start   | -p kubernetes-upgrade-967000          | kubernetes-upgrade-967000 | jenkins | v1.33.1 | 05 Aug 24 16:36 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-967000          | kubernetes-upgrade-967000 | jenkins | v1.33.1 | 05 Aug 24 16:36 PDT | 05 Aug 24 16:37 PDT |
	| start   | -p kubernetes-upgrade-967000          | kubernetes-upgrade-967000 | jenkins | v1.33.1 | 05 Aug 24 16:37 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0     |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-967000          | kubernetes-upgrade-967000 | jenkins | v1.33.1 | 05 Aug 24 16:37 PDT | 05 Aug 24 16:37 PDT |
	| start   | -p stopped-upgrade-596000             | minikube                  | jenkins | v1.26.0 | 05 Aug 24 16:37 PDT | 05 Aug 24 16:38 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-596000 stop           | minikube                  | jenkins | v1.26.0 | 05 Aug 24 16:38 PDT | 05 Aug 24 16:38 PDT |
	| start   | -p stopped-upgrade-596000             | stopped-upgrade-596000    | jenkins | v1.33.1 | 05 Aug 24 16:38 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 16:38:04
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 16:38:04.782446    4650 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:38:04.782593    4650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:38:04.782597    4650 out.go:304] Setting ErrFile to fd 2...
	I0805 16:38:04.782600    4650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:38:04.782777    4650 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:38:04.783936    4650 out.go:298] Setting JSON to false
	I0805 16:38:04.804231    4650 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4055,"bootTime":1722897029,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:38:04.804308    4650 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:38:04.808976    4650 out.go:177] * [stopped-upgrade-596000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:38:04.816935    4650 notify.go:220] Checking for updates...
	I0805 16:38:04.821926    4650 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:38:04.829826    4650 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:38:04.833873    4650 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:38:04.836826    4650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:38:04.839856    4650 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:38:04.842884    4650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:38:04.846111    4650 config.go:182] Loaded profile config "stopped-upgrade-596000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 16:38:04.848882    4650 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0805 16:38:04.851888    4650 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:38:04.855833    4650 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 16:38:04.862889    4650 start.go:297] selected driver: qemu2
	I0805 16:38:04.862896    4650 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-596000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 16:38:04.862941    4650 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:38:04.865767    4650 cni.go:84] Creating CNI manager for ""
	I0805 16:38:04.865783    4650 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:38:04.865815    4650 start.go:340] cluster config:
	{Name:stopped-upgrade-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-596000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 16:38:04.865864    4650 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:38:04.873890    4650 out.go:177] * Starting "stopped-upgrade-596000" primary control-plane node in "stopped-upgrade-596000" cluster
	I0805 16:38:04.877681    4650 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0805 16:38:04.877698    4650 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0805 16:38:04.877707    4650 cache.go:56] Caching tarball of preloaded images
	I0805 16:38:04.877769    4650 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:38:04.877775    4650 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0805 16:38:04.877830    4650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/config.json ...
	I0805 16:38:04.878304    4650 start.go:360] acquireMachinesLock for stopped-upgrade-596000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:38:04.878331    4650 start.go:364] duration metric: took 21.625µs to acquireMachinesLock for "stopped-upgrade-596000"
	I0805 16:38:04.878339    4650 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:38:04.878346    4650 fix.go:54] fixHost starting: 
	I0805 16:38:04.878452    4650 fix.go:112] recreateIfNeeded on stopped-upgrade-596000: state=Stopped err=<nil>
	W0805 16:38:04.878462    4650 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:38:04.882864    4650 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-596000" ...
	I0805 16:38:05.338453    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:38:05.338664    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:38:05.364017    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:38:05.364135    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:38:05.400809    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:38:05.400881    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:38:05.414479    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:38:05.414546    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:38:05.429637    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:38:05.429712    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:38:05.441924    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:38:05.441991    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:38:05.452142    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:38:05.452204    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:38:05.463568    4412 logs.go:276] 0 containers: []
	W0805 16:38:05.463583    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:38:05.463636    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:38:05.473964    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:38:05.473982    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:38:05.473988    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:38:05.509197    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:38:05.509207    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:38:05.526853    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:38:05.526864    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:38:05.538696    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:38:05.538707    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:38:05.550699    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:38:05.550710    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:38:05.564970    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:38:05.564983    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:38:05.577370    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:38:05.577383    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:38:05.588554    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:38:05.588565    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:38:05.600102    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:38:05.600113    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:38:05.618944    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:38:05.618955    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:38:05.630569    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:38:05.630579    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:38:05.668347    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:38:05.668360    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:38:05.673683    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:38:05.673693    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:38:05.687367    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:38:05.687379    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:38:05.708775    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:38:05.708785    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:38:05.720063    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:38:05.720074    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:38:05.738327    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:38:05.738339    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:38:08.262431    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:38:04.890798    4650 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:38:04.890867    4650 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/stopped-upgrade-596000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/stopped-upgrade-596000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/stopped-upgrade-596000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50468-:22,hostfwd=tcp::50469-:2376,hostname=stopped-upgrade-596000 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/stopped-upgrade-596000/disk.qcow2
	I0805 16:38:04.939397    4650 main.go:141] libmachine: STDOUT: 
	I0805 16:38:04.939423    4650 main.go:141] libmachine: STDERR: 
	I0805 16:38:04.939428    4650 main.go:141] libmachine: Waiting for VM to start (ssh -p 50468 docker@127.0.0.1)...
	I0805 16:38:13.265101    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:38:13.265287    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:38:13.277508    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:38:13.277590    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:38:13.288588    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:38:13.288664    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:38:13.299306    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:38:13.299374    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:38:13.313995    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:38:13.314065    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:38:13.324751    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:38:13.324819    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:38:13.335673    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:38:13.335738    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:38:13.346335    4412 logs.go:276] 0 containers: []
	W0805 16:38:13.346345    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:38:13.346403    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:38:13.357197    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:38:13.357213    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:38:13.357218    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:38:13.371199    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:38:13.371209    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:38:13.386319    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:38:13.386330    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:38:13.398660    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:38:13.398672    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:38:13.410313    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:38:13.410324    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:38:13.435646    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:38:13.435654    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:38:13.440238    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:38:13.440244    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:38:13.480630    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:38:13.480643    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:38:13.493888    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:38:13.493899    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:38:13.510039    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:38:13.510051    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:38:13.521135    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:38:13.521146    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:38:13.534988    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:38:13.534998    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:38:13.546737    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:38:13.546749    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:38:13.558369    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:38:13.558380    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:38:13.570515    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:38:13.570525    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:38:13.607626    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:38:13.607635    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:38:13.629528    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:38:13.629538    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:38:16.149869    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:38:21.152126    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:38:21.152555    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:38:21.191444    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:38:21.191576    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:38:21.212539    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:38:21.212652    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:38:21.227193    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:38:21.227273    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:38:21.239715    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:38:21.239790    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:38:21.250595    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:38:21.250660    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:38:21.262119    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:38:21.262193    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:38:21.272629    4412 logs.go:276] 0 containers: []
	W0805 16:38:21.272640    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:38:21.272695    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:38:21.283280    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:38:21.283298    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:38:21.283303    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:38:21.295600    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:38:21.295610    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:38:21.310159    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:38:21.310168    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:38:21.325062    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:38:21.325075    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:38:21.348368    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:38:21.348375    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:38:21.361925    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:38:21.361937    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:38:21.398816    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:38:21.398824    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:38:21.433635    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:38:21.433649    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:38:21.445613    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:38:21.445626    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:38:21.457586    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:38:21.457598    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:38:21.468939    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:38:21.468951    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:38:21.481494    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:38:21.481506    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:38:21.501289    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:38:21.501300    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:38:21.505725    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:38:21.505731    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:38:21.519642    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:38:21.519653    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:38:21.536406    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:38:21.536416    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:38:21.555072    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:38:21.555084    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:38:24.740138    4650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/config.json ...
	I0805 16:38:24.740807    4650 machine.go:94] provisionDockerMachine start ...
	I0805 16:38:24.740980    4650 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:24.741467    4650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104702a10] 0x104705270 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0805 16:38:24.741481    4650 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 16:38:24.835294    4650 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 16:38:24.835328    4650 buildroot.go:166] provisioning hostname "stopped-upgrade-596000"
	I0805 16:38:24.835454    4650 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:24.835713    4650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104702a10] 0x104705270 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0805 16:38:24.835725    4650 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-596000 && echo "stopped-upgrade-596000" | sudo tee /etc/hostname
	I0805 16:38:24.924580    4650 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-596000
	
	I0805 16:38:24.924693    4650 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:24.924910    4650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104702a10] 0x104705270 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0805 16:38:24.924923    4650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-596000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-596000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-596000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:38:25.001144    4650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:38:25.001160    4650 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1054/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1054/.minikube}
	I0805 16:38:25.001171    4650 buildroot.go:174] setting up certificates
	I0805 16:38:25.001177    4650 provision.go:84] configureAuth start
	I0805 16:38:25.001187    4650 provision.go:143] copyHostCerts
	I0805 16:38:25.001282    4650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1054/.minikube/ca.pem, removing ...
	I0805 16:38:25.001290    4650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1054/.minikube/ca.pem
	I0805 16:38:25.001522    4650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1054/.minikube/ca.pem (1078 bytes)
	I0805 16:38:25.001775    4650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1054/.minikube/cert.pem, removing ...
	I0805 16:38:25.001785    4650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1054/.minikube/cert.pem
	I0805 16:38:25.001843    4650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1054/.minikube/cert.pem (1123 bytes)
	I0805 16:38:25.001990    4650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1054/.minikube/key.pem, removing ...
	I0805 16:38:25.001994    4650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1054/.minikube/key.pem
	I0805 16:38:25.002058    4650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1054/.minikube/key.pem (1675 bytes)
	I0805 16:38:25.002201    4650 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-596000 san=[127.0.0.1 localhost minikube stopped-upgrade-596000]
	I0805 16:38:25.102466    4650 provision.go:177] copyRemoteCerts
	I0805 16:38:25.102505    4650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:38:25.102514    4650 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/stopped-upgrade-596000/id_rsa Username:docker}
	I0805 16:38:25.139082    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:38:25.145648    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0805 16:38:25.152248    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0805 16:38:25.159446    4650 provision.go:87] duration metric: took 158.267167ms to configureAuth
	I0805 16:38:25.159455    4650 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:38:25.159551    4650 config.go:182] Loaded profile config "stopped-upgrade-596000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 16:38:25.159589    4650 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:25.159679    4650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104702a10] 0x104705270 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0805 16:38:25.159684    4650 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:38:25.227268    4650 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:38:25.227276    4650 buildroot.go:70] root file system type: tmpfs
	I0805 16:38:25.227336    4650 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:38:25.227375    4650 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:25.227476    4650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104702a10] 0x104705270 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0805 16:38:25.227509    4650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:38:25.297968    4650 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:38:25.298029    4650 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:25.298142    4650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104702a10] 0x104705270 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0805 16:38:25.298153    4650 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:38:25.679368    4650 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:38:25.679381    4650 machine.go:97] duration metric: took 938.583916ms to provisionDockerMachine
	I0805 16:38:25.679388    4650 start.go:293] postStartSetup for "stopped-upgrade-596000" (driver="qemu2")
	I0805 16:38:25.679395    4650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:38:25.679454    4650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:38:25.679463    4650 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/stopped-upgrade-596000/id_rsa Username:docker}
	I0805 16:38:25.715984    4650 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:38:25.717395    4650 info.go:137] Remote host: Buildroot 2021.02.12
	I0805 16:38:25.717402    4650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1054/.minikube/addons for local assets ...
	I0805 16:38:25.717484    4650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1054/.minikube/files for local assets ...
	I0805 16:38:25.717590    4650 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1054/.minikube/files/etc/ssl/certs/15512.pem -> 15512.pem in /etc/ssl/certs
	I0805 16:38:25.717690    4650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:38:25.720157    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/files/etc/ssl/certs/15512.pem --> /etc/ssl/certs/15512.pem (1708 bytes)
	I0805 16:38:25.727401    4650 start.go:296] duration metric: took 48.008542ms for postStartSetup
	I0805 16:38:25.727413    4650 fix.go:56] duration metric: took 20.849489625s for fixHost
	I0805 16:38:25.727448    4650 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:25.727549    4650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104702a10] 0x104705270 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0805 16:38:25.727556    4650 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:38:25.794082    4650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722901106.066039962
	
	I0805 16:38:25.794090    4650 fix.go:216] guest clock: 1722901106.066039962
	I0805 16:38:25.794094    4650 fix.go:229] Guest: 2024-08-05 16:38:26.066039962 -0700 PDT Remote: 2024-08-05 16:38:25.727415 -0700 PDT m=+20.977551918 (delta=338.624962ms)
	I0805 16:38:25.794105    4650 fix.go:200] guest clock delta is within tolerance: 338.624962ms
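The fixHost step compares guest and host wall clocks: date +%s.%N prints seconds.nanoseconds, and the ~339 ms delta above is accepted as within tolerance. A rough way to reproduce the same check by hand against an SSH-reachable guest (the host alias "guest" is a placeholder):

  # Compare local and remote clocks; 'guest' is a hypothetical SSH host.
  local_ts=$(date +%s.%N)
  remote_ts=$(ssh guest 'date +%s.%N')
  awk -v a="$local_ts" -v b="$remote_ts" \
    'BEGIN { d = a - b; if (d < 0) d = -d; printf "delta: %.3fs\n", d }'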
	I0805 16:38:25.794108    4650 start.go:83] releasing machines lock for "stopped-upgrade-596000", held for 20.916194167s
	I0805 16:38:25.794180    4650 ssh_runner.go:195] Run: cat /version.json
	I0805 16:38:25.794180    4650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:38:25.794187    4650 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/stopped-upgrade-596000/id_rsa Username:docker}
	I0805 16:38:25.794200    4650 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/stopped-upgrade-596000/id_rsa Username:docker}
	W0805 16:38:25.794706    4650 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50468: connect: connection refused
	I0805 16:38:25.794726    4650 retry.go:31] will retry after 362.009177ms: dial tcp [::1]:50468: connect: connection refused
	W0805 16:38:26.193668    4650 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0805 16:38:26.193731    4650 ssh_runner.go:195] Run: systemctl --version
	I0805 16:38:26.195598    4650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 16:38:26.197302    4650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:38:26.197330    4650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0805 16:38:26.200180    4650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0805 16:38:26.204890    4650 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
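The two find/sed passes above rewrite any bridge or podman CNI config so its subnet and gateway match the pod CIDR minikube expects, and strip IPv6 entries; here they matched /etc/cni/net.d/87-podman-bridge.conflist. Reduced to a single file, the rewrite is essentially:

  # Pin the podman bridge config to the expected pod network (sketch).
  sudo sed -i -r \
    -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' \
    -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' \
    /etc/cni/net.d/87-podman-bridge.conflist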
	I0805 16:38:26.204902    4650 start.go:495] detecting cgroup driver to use...
	I0805 16:38:26.204982    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:38:26.211920    4650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0805 16:38:26.215479    4650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:38:26.219000    4650 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:38:26.219027    4650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:38:26.222068    4650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:38:26.224915    4650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:38:26.227958    4650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:38:26.231417    4650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:38:26.234856    4650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:38:26.238119    4650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:38:26.240892    4650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:38:26.243999    4650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:38:26.247285    4650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:38:26.250232    4650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:38:26.335066    4650 ssh_runner.go:195] Run: sudo systemctl restart containerd
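The run above first points crictl at containerd's socket, then edits /etc/containerd/config.toml in place: sandbox (pause) image, SystemdCgroup = false (i.e. the cgroupfs driver), the runc v2 runtime, the CNI conf_dir, and unprivileged ports. Condensed to the essential edits, assuming the stock config.toml keys are present:

  # Core config.toml edits from the sequence above, then restart containerd.
  sudo sed -i -r \
    -e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
    -e 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' \
    -e 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' \
    /etc/containerd/config.toml
  sudo systemctl daemon-reload && sudo systemctl restart containerd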
	I0805 16:38:26.341041    4650 start.go:495] detecting cgroup driver to use...
	I0805 16:38:26.341092    4650 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:38:26.348715    4650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:38:26.353971    4650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:38:26.360446    4650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:38:26.365283    4650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:38:26.369580    4650 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:38:26.411359    4650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:38:26.416868    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:38:26.422657    4650 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:38:26.423857    4650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:38:26.426560    4650 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:38:26.431420    4650 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:38:26.512927    4650 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:38:26.590094    4650 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:38:26.590155    4650 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:38:26.595496    4650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:38:26.674171    4650 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:38:27.803722    4650 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.129558083s)
	I0805 16:38:27.803782    4650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 16:38:27.808256    4650 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 16:38:27.814547    4650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:38:27.818935    4650 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 16:38:27.892426    4650 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 16:38:27.975047    4650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:38:28.054977    4650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 16:38:28.061197    4650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:38:28.065977    4650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:38:28.151634    4650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
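Since the requested runtime is docker, the sequence above stops containerd and crio, re-points crictl at /var/run/cri-dockerd.sock, and cycles docker plus the cri-docker socket and service. As a condensed, order-preserving sketch:

  # Runtime switchover, condensed from the systemctl calls above.
  sudo systemctl stop -f containerd crio
  sudo systemctl unmask docker.service && sudo systemctl enable docker.socket
  sudo systemctl restart docker
  sudo systemctl unmask cri-docker.socket && sudo systemctl enable cri-docker.socket
  sudo systemctl daemon-reload && sudo systemctl restart cri-docker.socket cri-docker.service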
	I0805 16:38:28.191148    4650 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 16:38:28.191233    4650 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 16:38:28.193372    4650 start.go:563] Will wait 60s for crictl version
	I0805 16:38:28.193397    4650 ssh_runner.go:195] Run: which crictl
	I0805 16:38:28.194707    4650 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 16:38:28.208770    4650 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0805 16:38:28.208842    4650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:38:28.224739    4650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:38:24.069264    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:38:28.243794    4650 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0805 16:38:28.243857    4650 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0805 16:38:28.245216    4650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
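The /etc/hosts update above uses a filter-then-append idiom that is safe to re-run: drop any existing host.minikube.internal line, append the fresh mapping, and sudo-copy the temp file back over /etc/hosts. Spelled out (the field separator is a literal tab):

  # Idempotent hosts-entry update, as run above.
  { grep -v $'\thost.minikube.internal$' /etc/hosts
    printf '10.0.2.2\thost.minikube.internal\n'
  } > /tmp/h.$$
  sudo cp /tmp/h.$$ /etc/hosts

The same idiom reappears later for control-plane.minikube.internal.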
	I0805 16:38:28.248640    4650 kubeadm.go:883] updating cluster {Name:stopped-upgrade-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-596000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0805 16:38:28.248682    4650 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0805 16:38:28.248726    4650 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:38:28.258927    4650 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 16:38:28.258949    4650 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0805 16:38:28.259000    4650 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 16:38:28.262196    4650 ssh_runner.go:195] Run: which lz4
	I0805 16:38:28.263420    4650 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 16:38:28.264568    4650 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 16:38:28.264577    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0805 16:38:29.186534    4650 docker.go:649] duration metric: took 923.157792ms to copy over tarball
	I0805 16:38:29.186616    4650 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
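Because the guest still carries k8s.gcr.io/* tags while this Kubernetes version expects registry.k8s.io/*, the existing images are not usable as-is, so minikube scps its ~360 MB preloaded-images tarball into the guest and unpacks it straight into /var. The extract step, annotated:

  # Unpack the preloaded image tarball into /var; --xattrs preserves extended
  # attributes (e.g. file capabilities on binaries), -I lz4 decompresses.
  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4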
	I0805 16:38:29.070050    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:38:29.070155    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:38:29.083144    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:38:29.083218    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:38:29.094979    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:38:29.095050    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:38:29.106291    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:38:29.106380    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:38:29.118006    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:38:29.118079    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:38:29.129628    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:38:29.129702    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:38:29.145103    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:38:29.145179    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:38:29.156998    4412 logs.go:276] 0 containers: []
	W0805 16:38:29.157009    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:38:29.157076    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:38:29.168830    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:38:29.168849    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:38:29.168855    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:38:29.184642    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:38:29.184655    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:38:29.197823    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:38:29.197839    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:38:29.237015    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:38:29.237035    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:38:29.253079    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:38:29.253093    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:38:29.265632    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:38:29.265647    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:38:29.279451    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:38:29.279463    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:38:29.292069    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:38:29.292081    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:38:29.319105    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:38:29.319124    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:38:29.343909    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:38:29.343922    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:38:29.359751    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:38:29.359764    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:38:29.378148    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:38:29.378160    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:38:29.393386    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:38:29.393403    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:38:29.408601    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:38:29.408617    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:38:29.421894    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:38:29.421908    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:38:29.426964    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:38:29.426973    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:38:29.465255    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:38:29.465268    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:38:31.982478    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:38:30.355542    4650 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.168935416s)
	I0805 16:38:30.355555    4650 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 16:38:30.371415    4650 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 16:38:30.375098    4650 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0805 16:38:30.380250    4650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:38:30.457444    4650 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:38:32.066011    4650 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.608580833s)
	I0805 16:38:32.066104    4650 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:38:32.082294    4650 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 16:38:32.082308    4650 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0805 16:38:32.082314    4650 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0805 16:38:32.086312    4650 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:38:32.087985    4650 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 16:38:32.089635    4650 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:38:32.089912    4650 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 16:38:32.090928    4650 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 16:38:32.091080    4650 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 16:38:32.092468    4650 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 16:38:32.093828    4650 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0805 16:38:32.093920    4650 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 16:38:32.094189    4650 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 16:38:32.094984    4650 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0805 16:38:32.095254    4650 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 16:38:32.096216    4650 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 16:38:32.096240    4650 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0805 16:38:32.097058    4650 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0805 16:38:32.097703    4650 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 16:38:32.533899    4650 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0805 16:38:32.548102    4650 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0805 16:38:32.548124    4650 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 16:38:32.548176    4650 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0805 16:38:32.549228    4650 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 16:38:32.549758    4650 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0805 16:38:32.552466    4650 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0805 16:38:32.558112    4650 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0805 16:38:32.566379    4650 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0805 16:38:32.566431    4650 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0805 16:38:32.570522    4650 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0805 16:38:32.570541    4650 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 16:38:32.570586    4650 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 16:38:32.580561    4650 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0805 16:38:32.580582    4650 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 16:38:32.580638    4650 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	W0805 16:38:32.588811    4650 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0805 16:38:32.588956    4650 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0805 16:38:32.593239    4650 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0805 16:38:32.593254    4650 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0805 16:38:32.593261    4650 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 16:38:32.593264    4650 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0805 16:38:32.593314    4650 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0805 16:38:32.593318    4650 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0805 16:38:32.593439    4650 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0805 16:38:32.593450    4650 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0805 16:38:32.593467    4650 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0805 16:38:32.620683    4650 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0805 16:38:32.621983    4650 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0805 16:38:32.623564    4650 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0805 16:38:32.623579    4650 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 16:38:32.623621    4650 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0805 16:38:32.633191    4650 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0805 16:38:32.633857    4650 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0805 16:38:32.633970    4650 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0805 16:38:32.634007    4650 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0805 16:38:32.634066    4650 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0805 16:38:32.642913    4650 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0805 16:38:32.642935    4650 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0805 16:38:32.642945    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0805 16:38:32.642944    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0805 16:38:32.643005    4650 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0805 16:38:32.643098    4650 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0805 16:38:32.650029    4650 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0805 16:38:32.650060    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0805 16:38:32.665197    4650 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0805 16:38:32.665218    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0805 16:38:32.705564    4650 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0805 16:38:32.705684    4650 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:38:32.756952    4650 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0805 16:38:32.761901    4650 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0805 16:38:32.761913    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0805 16:38:32.773876    4650 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0805 16:38:32.773905    4650 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:38:32.773971    4650 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:38:32.897425    4650 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0805 16:38:32.897466    4650 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0805 16:38:32.897595    4650 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0805 16:38:32.904979    4650 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0805 16:38:32.905009    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0805 16:38:32.980813    4650 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0805 16:38:32.980827    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0805 16:38:33.279471    4650 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0805 16:38:33.279496    4650 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0805 16:38:33.279507    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0805 16:38:33.411914    4650 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0805 16:38:33.411962    4650 cache_images.go:92] duration metric: took 1.329666875s to LoadCachedImages
	W0805 16:38:33.411998    4650 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
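Each expected image is inspected in the runtime; when the stored ID does not match the pinned hash, the stale tag is removed and the per-image tarball is copied over and piped into docker load (pause, coredns, storage-provisioner and etcd succeed above, while the kube-* tarballs are absent from the local cache, hence the warning). A sketch of the per-image check-and-load, simplified to a presence check rather than the real hash comparison:

  # Simplified per-image load; the real code compares {{.Id}} to an expected hash.
  img=registry.k8s.io/pause:3.7                 # example image from the log
  tarball=/var/lib/minikube/images/pause_3.7    # staged by scp above
  docker image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1 \
    || sudo cat "$tarball" | docker load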
	I0805 16:38:33.412003    4650 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0805 16:38:33.412058    4650 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-596000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-596000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 16:38:33.412132    4650 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 16:38:33.425921    4650 cni.go:84] Creating CNI manager for ""
	I0805 16:38:33.425934    4650 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:38:33.425939    4650 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 16:38:33.425948    4650 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-596000 NodeName:stopped-upgrade-596000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 16:38:33.426012    4650 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-596000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
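The kubeadm config above bundles four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and is staged as /var/tmp/minikube/kubeadm.yaml.new before being swapped in. One way to sanity-check such a file without touching the node is a kubeadm dry run; treat this as a sketch, since flag behavior varies across kubeadm versions:

  # Validate the staged config without applying it (version-dependent output).
  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run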
	
	I0805 16:38:33.426075    4650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0805 16:38:33.428858    4650 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 16:38:33.428888    4650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 16:38:33.431636    4650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0805 16:38:33.436816    4650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 16:38:33.441447    4650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0805 16:38:33.446322    4650 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0805 16:38:33.447527    4650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:38:33.451385    4650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:38:33.530259    4650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:38:33.537775    4650 certs.go:68] Setting up /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000 for IP: 10.0.2.15
	I0805 16:38:33.537784    4650 certs.go:194] generating shared ca certs ...
	I0805 16:38:33.537794    4650 certs.go:226] acquiring lock for ca certs: {Name:mk07f84aa9f3d3ae10a769c730392685ad86b558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:38:33.537965    4650 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/ca.key
	I0805 16:38:33.538000    4650 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/proxy-client-ca.key
	I0805 16:38:33.538005    4650 certs.go:256] generating profile certs ...
	I0805 16:38:33.538069    4650 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/client.key
	I0805 16:38:33.538092    4650 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.key.2a635175
	I0805 16:38:33.538100    4650 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.crt.2a635175 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0805 16:38:33.714823    4650 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.crt.2a635175 ...
	I0805 16:38:33.714835    4650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.crt.2a635175: {Name:mkc5d234715702d6ad60be3acf11728f83485ad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:38:33.715115    4650 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.key.2a635175 ...
	I0805 16:38:33.715120    4650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.key.2a635175: {Name:mk1581b20ad59d081720986c583c873b86ece9a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:38:33.715265    4650 certs.go:381] copying /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.crt.2a635175 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.crt
	I0805 16:38:33.715405    4650 certs.go:385] copying /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.key.2a635175 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.key
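The profile's apiserver cert is minted for the cluster service IP, loopback, and the node IP (10.96.0.1, 127.0.0.1, 10.0.0.1, 10.0.2.15). minikube generates it in Go, but an illustrative openssl equivalent signed by the shared CA (file names hypothetical) would be:

  # Illustrative only: same SANs via openssl, signed by the minikube CA.
  openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key \
    -subj "/CN=minikube" -out apiserver.csr
  openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 365 -out apiserver.crt \
    -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:10.0.2.15')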
	I0805 16:38:33.715574    4650 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/proxy-client.key
	I0805 16:38:33.715704    4650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/1551.pem (1338 bytes)
	W0805 16:38:33.715730    4650 certs.go:480] ignoring /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/1551_empty.pem, impossibly tiny 0 bytes
	I0805 16:38:33.715735    4650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 16:38:33.715760    4650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem (1078 bytes)
	I0805 16:38:33.715780    4650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem (1123 bytes)
	I0805 16:38:33.715797    4650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/key.pem (1675 bytes)
	I0805 16:38:33.715836    4650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/files/etc/ssl/certs/15512.pem (1708 bytes)
	I0805 16:38:33.716173    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 16:38:33.723610    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 16:38:33.731038    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 16:38:33.738654    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 16:38:33.745911    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0805 16:38:33.753029    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 16:38:33.759890    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 16:38:33.767170    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 16:38:33.774577    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/files/etc/ssl/certs/15512.pem --> /usr/share/ca-certificates/15512.pem (1708 bytes)
	I0805 16:38:33.781129    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 16:38:33.787984    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/1551.pem --> /usr/share/ca-certificates/1551.pem (1338 bytes)
	I0805 16:38:33.795226    4650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 16:38:33.800372    4650 ssh_runner.go:195] Run: openssl version
	I0805 16:38:33.802469    4650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 16:38:33.805420    4650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:38:33.806797    4650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:38:33.806817    4650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:38:33.808743    4650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 16:38:33.811917    4650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1551.pem && ln -fs /usr/share/ca-certificates/1551.pem /etc/ssl/certs/1551.pem"
	I0805 16:38:33.815257    4650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1551.pem
	I0805 16:38:33.816793    4650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 22:55 /usr/share/ca-certificates/1551.pem
	I0805 16:38:33.816810    4650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1551.pem
	I0805 16:38:33.818600    4650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1551.pem /etc/ssl/certs/51391683.0"
	I0805 16:38:33.821489    4650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15512.pem && ln -fs /usr/share/ca-certificates/15512.pem /etc/ssl/certs/15512.pem"
	I0805 16:38:33.824462    4650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15512.pem
	I0805 16:38:33.825880    4650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 22:55 /usr/share/ca-certificates/15512.pem
	I0805 16:38:33.825897    4650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15512.pem
	I0805 16:38:33.827607    4650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15512.pem /etc/ssl/certs/3ec20f2e.0"
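Each CA is exposed under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0 above) so lookup-by-hash in /etc/ssl/certs works. The idiom, generalized to one cert:

  # Link a CA into the OpenSSL hash-lookup directory (two-step, as above).
  pem=/usr/share/ca-certificates/minikubeCA.pem
  sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
  hash=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. b5213941
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$hash.0"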
	I0805 16:38:33.831001    4650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:38:33.832621    4650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 16:38:33.834482    4650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 16:38:33.836318    4650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 16:38:33.838213    4650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 16:38:33.840015    4650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 16:38:33.841849    4650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
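Before deciding whether a restart can reuse the existing control-plane certs, each one is checked for at least 24 hours of remaining validity: openssl x509 -checkend 86400 exits non-zero if the cert expires within that window. As a loop over a few of the certs checked above:

  # Flag any control-plane cert that expires within 24h (86400 seconds).
  for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
    openssl x509 -noout -in "/var/lib/minikube/certs/$crt.crt" -checkend 86400 \
      || echo "expiring soon: $crt"
  done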
	I0805 16:38:33.843636    4650 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:st
opped-upgrade-596000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 16:38:33.843706    4650 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 16:38:33.854210    4650 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 16:38:33.857361    4650 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 16:38:33.857367    4650 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 16:38:33.857393    4650 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 16:38:33.860277    4650 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:38:33.860569    4650 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-596000" does not appear in /Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:38:33.860675    4650 kubeconfig.go:62] /Users/jenkins/minikube-integration/19373-1054/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-596000" cluster setting kubeconfig missing "stopped-upgrade-596000" context setting]
	I0805 16:38:33.860862    4650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/kubeconfig: {Name:mk0db307fdf97cd8e18f7fd35d350a5523a32e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:38:33.861586    4650 kapi.go:59] client config for stopped-upgrade-596000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1054/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]ui
nt8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105a97e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:38:33.861919    4650 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 16:38:33.864499    4650 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-596000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0805 16:38:33.864503    4650 kubeadm.go:1160] stopping kube-system containers ...
	I0805 16:38:33.864542    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 16:38:33.875176    4650 docker.go:483] Stopping containers: [4ac4a306b9cd cb1264009016 671b0bb9cd73 846a2455089c e42b40032b59 e5542e7cf8f0 9dfece4a698f 1ab90127fa79 8d6468f134fc]
	I0805 16:38:33.875247    4650 ssh_runner.go:195] Run: docker stop 4ac4a306b9cd cb1264009016 671b0bb9cd73 846a2455089c e42b40032b59 e5542e7cf8f0 9dfece4a698f 1ab90127fa79 8d6468f134fc
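Restart begins by stopping every kube-system container: docker ps -a with a name filter matching the kubelet's k8s_<container>_<pod>_<namespace>_ naming scheme, then a bulk docker stop of the collected IDs. As one pipeline:

  # Stop all kube-system containers (same regex filter as above).
  docker ps -a --filter=name='k8s_.*_(kube-system)_' --format '{{.ID}}' \
    | xargs -r docker stop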
	I0805 16:38:33.886005    4650 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 16:38:33.891359    4650 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 16:38:33.894451    4650 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:38:33.894456    4650 kubeadm.go:157] found existing configuration files:
	
	I0805 16:38:33.894476    4650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf
	I0805 16:38:33.896990    4650 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:38:33.897008    4650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 16:38:33.899647    4650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf
	I0805 16:38:33.902498    4650 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:38:33.902517    4650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 16:38:33.905120    4650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf
	I0805 16:38:33.907625    4650 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:38:33.907656    4650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 16:38:33.910631    4650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf
	I0805 16:38:33.913271    4650 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:38:33.913300    4650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
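
The four grep/rm pairs above implement a simple stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. Collapsed into one loop, with the endpoint and file list taken from the log:

	# remove any kubeconfig that does not reference the expected endpoint
	endpoint="https://control-plane.minikube.internal:50503"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done
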
	I0805 16:38:33.915873    4650 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 16:38:33.918769    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:38:33.941353    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:38:34.499681    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:38:34.634403    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:38:34.663968    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
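
Rather than a full kubeadm init, the restart path replays individual init phases against the refreshed config. The five phases above, run in order, are equivalent to this sketch (the versioned binaries directory comes from the log):

	BIN=/var/lib/minikube/binaries/v1.24.1
	# $phase is intentionally unquoted so "certs all" splits into two arguments
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done
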
	I0805 16:38:34.693072    4650 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:38:34.693149    4650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:38:36.984667    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:38:36.985137    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:38:37.023218    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:38:37.023344    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:38:37.044683    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:38:37.044783    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:38:37.063180    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:38:37.063256    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:38:37.075672    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:38:37.075745    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:38:37.086645    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:38:37.086715    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:38:37.098364    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:38:37.098434    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:38:37.108947    4412 logs.go:276] 0 containers: []
	W0805 16:38:37.108959    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:38:37.109022    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:38:37.120320    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:38:37.120341    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:38:37.120347    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:38:37.135345    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:38:37.135358    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:38:37.150287    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:38:37.150298    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:38:37.166188    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:38:37.166207    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:38:37.181056    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:38:37.181074    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:38:37.194348    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:38:37.194363    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:38:37.213250    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:38:37.213262    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:38:37.225198    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:38:37.225209    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:38:37.238881    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:38:37.238892    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:38:37.275737    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:38:37.275752    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:38:37.287670    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:38:37.287682    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:38:37.300454    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:38:37.300467    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:38:37.305485    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:38:37.305497    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:38:37.342399    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:38:37.342411    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:38:37.354545    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:38:37.354558    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:38:37.369891    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:38:37.369903    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:38:37.382637    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:38:37.382648    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
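
Each diagnostic pass above follows the same pattern: enumerate the containers for each kube-system component by name filter, then tail each container's logs; the "container status" step prefers crictl and falls back to docker when crictl is absent. As a standalone sketch of that cycle:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
	  for id in $(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'); do
	    docker logs --tail 400 "$id"
	  done
	done
	# container status: use crictl if installed, otherwise fall back to docker
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
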
	I0805 16:38:35.195319    4650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:38:35.695185    4650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:38:35.699394    4650 api_server.go:72] duration metric: took 1.006343541s to wait for apiserver process to appear ...
	I0805 16:38:35.699404    4650 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:38:35.699413    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
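
Both test processes (4412 and 4650) poll the same endpoint here; the repeated "stopped" lines that follow are five-second client timeouts against an apiserver that never becomes healthy. A hand-rolled equivalent of the health wait (curl flags are an assumption, not from the log; -k is needed because the host shell does not trust the cluster CA):

	# poll /healthz until the apiserver answers
	until curl -fsk --max-time 5 https://10.0.2.15:8443/healthz; do
	  sleep 1
	done
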
	I0805 16:38:39.908365    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:38:40.701384    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:38:40.701410    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:38:44.910788    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:38:44.910928    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:38:44.922560    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:38:44.922633    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:38:44.936893    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:38:44.936966    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:38:44.950311    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:38:44.950378    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:38:44.961157    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:38:44.961221    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:38:44.972376    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:38:44.972445    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:38:44.983315    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:38:44.983382    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:38:44.993682    4412 logs.go:276] 0 containers: []
	W0805 16:38:44.993692    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:38:44.993742    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:38:45.004445    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:38:45.004462    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:38:45.004467    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:38:45.039521    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:38:45.039532    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:38:45.052725    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:38:45.052738    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:38:45.064964    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:38:45.064976    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:38:45.079474    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:38:45.079484    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:38:45.094429    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:38:45.094438    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:38:45.112703    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:38:45.112714    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:38:45.123596    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:38:45.123608    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:38:45.137847    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:38:45.137859    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:38:45.153548    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:38:45.153560    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:38:45.165670    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:38:45.165682    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:38:45.183702    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:38:45.183716    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:38:45.195451    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:38:45.195461    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:38:45.230616    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:38:45.230624    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:38:45.235104    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:38:45.235112    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:38:45.247239    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:38:45.247250    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:38:45.261940    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:38:45.261950    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:38:47.786900    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:38:45.701533    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:38:45.701558    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:38:52.789158    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:38:52.789316    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:38:52.808391    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:38:52.808486    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:38:52.822686    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:38:52.822768    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:38:52.834385    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:38:52.834454    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:38:52.845620    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:38:52.845684    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:38:52.855710    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:38:52.855778    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:38:52.868357    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:38:52.868421    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:38:52.878909    4412 logs.go:276] 0 containers: []
	W0805 16:38:52.878925    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:38:52.878984    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:38:52.889296    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:38:52.889314    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:38:52.889320    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:38:52.894265    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:38:52.894275    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:38:52.908185    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:38:52.908199    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:38:52.922832    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:38:52.922846    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:38:52.934663    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:38:52.934677    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:38:52.970951    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:38:52.970966    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:38:52.987479    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:38:52.987491    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:38:53.010625    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:38:53.010633    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:38:53.049398    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:38:53.049410    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:38:53.063532    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:38:53.063543    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:38:53.078215    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:38:53.078224    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:38:53.090025    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:38:53.090034    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:38:53.104526    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:38:53.104535    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:38:53.115777    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:38:53.115789    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:38:53.132334    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:38:53.132343    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:38:53.147737    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:38:53.147751    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:38:53.165003    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:38:53.165012    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:38:50.701763    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:38:50.701804    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:38:55.679816    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:38:55.702170    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:38:55.702208    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:00.681923    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:00.682113    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:39:00.702264    4412 logs.go:276] 2 containers: [6885e196a1ab d7d11be02070]
	I0805 16:39:00.702337    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:39:00.714980    4412 logs.go:276] 2 containers: [a1f2b8584f23 e1b955358bac]
	I0805 16:39:00.715051    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:39:00.725945    4412 logs.go:276] 1 containers: [390a1bba5579]
	I0805 16:39:00.726011    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:39:00.736611    4412 logs.go:276] 2 containers: [180b0af01672 e1db204b999f]
	I0805 16:39:00.736673    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:39:00.746909    4412 logs.go:276] 1 containers: [11b60b1da135]
	I0805 16:39:00.746971    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:39:00.758324    4412 logs.go:276] 2 containers: [5364974de92f 8875e7fd4be2]
	I0805 16:39:00.758390    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:39:00.769464    4412 logs.go:276] 0 containers: []
	W0805 16:39:00.769475    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:39:00.769533    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:39:00.779878    4412 logs.go:276] 2 containers: [aab25dc371c9 4512d4f44bed]
	I0805 16:39:00.779896    4412 logs.go:123] Gathering logs for storage-provisioner [4512d4f44bed] ...
	I0805 16:39:00.779904    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4512d4f44bed"
	I0805 16:39:00.794172    4412 logs.go:123] Gathering logs for etcd [a1f2b8584f23] ...
	I0805 16:39:00.794183    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f2b8584f23"
	I0805 16:39:00.807513    4412 logs.go:123] Gathering logs for kube-scheduler [e1db204b999f] ...
	I0805 16:39:00.807527    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1db204b999f"
	I0805 16:39:00.822715    4412 logs.go:123] Gathering logs for kube-controller-manager [5364974de92f] ...
	I0805 16:39:00.822725    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5364974de92f"
	I0805 16:39:00.841021    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:39:00.841031    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:39:00.853511    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:39:00.853522    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:39:00.891528    4412 logs.go:123] Gathering logs for kube-apiserver [6885e196a1ab] ...
	I0805 16:39:00.891539    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6885e196a1ab"
	I0805 16:39:00.905565    4412 logs.go:123] Gathering logs for coredns [390a1bba5579] ...
	I0805 16:39:00.905575    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390a1bba5579"
	I0805 16:39:00.916635    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:39:00.916646    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:39:00.939415    4412 logs.go:123] Gathering logs for kube-apiserver [d7d11be02070] ...
	I0805 16:39:00.939425    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d11be02070"
	I0805 16:39:00.961381    4412 logs.go:123] Gathering logs for kube-controller-manager [8875e7fd4be2] ...
	I0805 16:39:00.961392    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8875e7fd4be2"
	I0805 16:39:00.972858    4412 logs.go:123] Gathering logs for storage-provisioner [aab25dc371c9] ...
	I0805 16:39:00.972868    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab25dc371c9"
	I0805 16:39:00.984196    4412 logs.go:123] Gathering logs for kube-scheduler [180b0af01672] ...
	I0805 16:39:00.984206    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180b0af01672"
	I0805 16:39:00.998413    4412 logs.go:123] Gathering logs for kube-proxy [11b60b1da135] ...
	I0805 16:39:00.998423    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b60b1da135"
	I0805 16:39:01.010181    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:39:01.010189    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:39:01.015185    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:39:01.015191    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:39:01.051511    4412 logs.go:123] Gathering logs for etcd [e1b955358bac] ...
	I0805 16:39:01.051523    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1b955358bac"
	I0805 16:39:03.567437    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:00.702412    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:00.702429    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:08.569935    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:08.569971    4412 kubeadm.go:597] duration metric: took 4m3.877478583s to restartPrimaryControlPlane
	W0805 16:39:08.570007    4412 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 16:39:08.570022    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
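
When the phased restart times out, minikube falls back to a full reset and re-init. The reset invocation above, reflowed for readability (note it passes the pre-drift socket path, without the unix:// scheme; --force skips the interactive confirmation):

	sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	  kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
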
	I0805 16:39:09.545410    4412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:39:09.550667    4412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 16:39:09.554338    4412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 16:39:09.557241    4412 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:39:09.557247    4412 kubeadm.go:157] found existing configuration files:
	
	I0805 16:39:09.557272    4412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/admin.conf
	I0805 16:39:09.559716    4412 kubeadm.go:163] "https://control-plane.minikube.internal:50282" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:39:09.559742    4412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 16:39:09.562297    4412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/kubelet.conf
	I0805 16:39:09.565264    4412 kubeadm.go:163] "https://control-plane.minikube.internal:50282" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:39:09.565282    4412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 16:39:09.567976    4412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/controller-manager.conf
	I0805 16:39:09.570584    4412 kubeadm.go:163] "https://control-plane.minikube.internal:50282" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:39:09.570606    4412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 16:39:09.573755    4412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/scheduler.conf
	I0805 16:39:09.576528    4412 kubeadm.go:163] "https://control-plane.minikube.internal:50282" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50282 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:39:09.576549    4412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 16:39:09.579145    4412 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
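
The final re-init passes an explicit ignore list so leftovers from the failed cluster (manifest files, minikube data directories, the already-bound kubelet port, plus the swap, CPU, and memory checks) do not abort preflight. The same invocation, reflowed:

	sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem
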
	I0805 16:39:09.598278    4412 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0805 16:39:09.598324    4412 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 16:39:09.652665    4412 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 16:39:09.652771    4412 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 16:39:09.652829    4412 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 16:39:09.708170    4412 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 16:39:05.702910    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:05.702960    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:09.712401    4412 out.go:204]   - Generating certificates and keys ...
	I0805 16:39:09.712441    4412 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 16:39:09.712481    4412 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 16:39:09.712531    4412 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 16:39:09.712566    4412 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 16:39:09.712606    4412 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 16:39:09.712636    4412 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 16:39:09.712670    4412 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 16:39:09.712702    4412 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 16:39:09.712748    4412 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 16:39:09.712792    4412 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 16:39:09.712819    4412 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 16:39:09.712849    4412 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 16:39:09.846137    4412 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 16:39:09.919921    4412 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 16:39:09.981526    4412 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 16:39:10.181933    4412 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 16:39:10.216100    4412 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 16:39:10.216470    4412 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 16:39:10.216511    4412 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 16:39:10.300750    4412 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 16:39:10.304633    4412 out.go:204]   - Booting up control plane ...
	I0805 16:39:10.304682    4412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 16:39:10.304786    4412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 16:39:10.305794    4412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 16:39:10.306543    4412 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 16:39:10.307367    4412 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 16:39:10.703722    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:10.703748    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:14.309419    4412 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.002014 seconds
	I0805 16:39:14.309534    4412 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 16:39:14.312965    4412 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 16:39:14.820996    4412 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 16:39:14.821133    4412 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-230000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 16:39:15.325042    4412 kubeadm.go:310] [bootstrap-token] Using token: bac5b6.noii76sbj0s4yru1
	I0805 16:39:15.331298    4412 out.go:204]   - Configuring RBAC rules ...
	I0805 16:39:15.331366    4412 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 16:39:15.331407    4412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 16:39:15.335878    4412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 16:39:15.336896    4412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 16:39:15.340145    4412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 16:39:15.341628    4412 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 16:39:15.345926    4412 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 16:39:15.515808    4412 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 16:39:15.729768    4412 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 16:39:15.730271    4412 kubeadm.go:310] 
	I0805 16:39:15.730303    4412 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 16:39:15.730307    4412 kubeadm.go:310] 
	I0805 16:39:15.730346    4412 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 16:39:15.730351    4412 kubeadm.go:310] 
	I0805 16:39:15.730363    4412 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 16:39:15.730392    4412 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 16:39:15.730415    4412 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 16:39:15.730418    4412 kubeadm.go:310] 
	I0805 16:39:15.730456    4412 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 16:39:15.730463    4412 kubeadm.go:310] 
	I0805 16:39:15.730494    4412 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 16:39:15.730498    4412 kubeadm.go:310] 
	I0805 16:39:15.730526    4412 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 16:39:15.730573    4412 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 16:39:15.730619    4412 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 16:39:15.730622    4412 kubeadm.go:310] 
	I0805 16:39:15.730666    4412 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 16:39:15.730716    4412 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 16:39:15.730720    4412 kubeadm.go:310] 
	I0805 16:39:15.730757    4412 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bac5b6.noii76sbj0s4yru1 \
	I0805 16:39:15.730804    4412 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7280cf86517627a1b2e8b1aa5e2d30adc1efda7485123a11788055778cfe70b7 \
	I0805 16:39:15.730816    4412 kubeadm.go:310] 	--control-plane 
	I0805 16:39:15.730821    4412 kubeadm.go:310] 
	I0805 16:39:15.730855    4412 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 16:39:15.730858    4412 kubeadm.go:310] 
	I0805 16:39:15.730892    4412 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bac5b6.noii76sbj0s4yru1 \
	I0805 16:39:15.730941    4412 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7280cf86517627a1b2e8b1aa5e2d30adc1efda7485123a11788055778cfe70b7 
	I0805 16:39:15.731002    4412 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 16:39:15.731010    4412 cni.go:84] Creating CNI manager for ""
	I0805 16:39:15.731018    4412 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:39:15.735635    4412 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 16:39:15.743516    4412 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 16:39:15.746904    4412 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
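
The 496-byte conflist payload itself is not shown in the log. A representative bridge conflist of the kind minikube writes to /etc/cni/net.d/1-k8s.conflist might look like the following; the field values here are illustrative assumptions, not the actual file contents:

	# hypothetical minimal bridge CNI config; values are assumptions
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
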
	I0805 16:39:15.751813    4412 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 16:39:15.751860    4412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:39:15.751860    4412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-230000 minikube.k8s.io/updated_at=2024_08_05T16_39_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4 minikube.k8s.io/name=running-upgrade-230000 minikube.k8s.io/primary=true
	I0805 16:39:15.754851    4412 ops.go:34] apiserver oom_adj: -16
	I0805 16:39:15.807308    4412 kubeadm.go:1113] duration metric: took 55.487709ms to wait for elevateKubeSystemPrivileges
	I0805 16:39:15.807324    4412 kubeadm.go:394] duration metric: took 4m11.128774792s to StartCluster
	I0805 16:39:15.807334    4412 settings.go:142] acquiring lock: {Name:mk8f45924d83b23294fe6a7ba250768dbca87de2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:39:15.807418    4412 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:39:15.807798    4412 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/kubeconfig: {Name:mk0db307fdf97cd8e18f7fd35d350a5523a32e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:39:15.807992    4412 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:39:15.808001    4412 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 16:39:15.808034    4412 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-230000"
	I0805 16:39:15.808036    4412 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-230000"
	I0805 16:39:15.808051    4412 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-230000"
	I0805 16:39:15.808065    4412 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-230000"
	W0805 16:39:15.808070    4412 addons.go:243] addon storage-provisioner should already be in state true
	I0805 16:39:15.808081    4412 host.go:66] Checking if "running-upgrade-230000" exists ...
	I0805 16:39:15.808086    4412 config.go:182] Loaded profile config "running-upgrade-230000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 16:39:15.808943    4412 kapi.go:59] client config for running-upgrade-230000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/running-upgrade-230000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1054/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1017e3e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:39:15.809067    4412 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-230000"
	W0805 16:39:15.809072    4412 addons.go:243] addon default-storageclass should already be in state true
	I0805 16:39:15.809080    4412 host.go:66] Checking if "running-upgrade-230000" exists ...
	I0805 16:39:15.812599    4412 out.go:177] * Verifying Kubernetes components...
	I0805 16:39:15.812944    4412 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 16:39:15.818779    4412 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 16:39:15.818788    4412 sshutil.go:53] new ssh client: &{IP:localhost Port:50250 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/running-upgrade-230000/id_rsa Username:docker}
	I0805 16:39:15.822579    4412 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:39:15.826520    4412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:39:15.830545    4412 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:39:15.830553    4412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 16:39:15.830559    4412 sshutil.go:53] new ssh client: &{IP:localhost Port:50250 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/running-upgrade-230000/id_rsa Username:docker}
	I0805 16:39:15.920555    4412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:39:15.926819    4412 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:39:15.926876    4412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:39:15.930941    4412 api_server.go:72] duration metric: took 122.93975ms to wait for apiserver process to appear ...
	I0805 16:39:15.930949    4412 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:39:15.930955    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:15.938003    4412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:39:15.996797    4412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 16:39:15.703883    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:15.703926    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:20.932975    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:20.933003    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:20.704868    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:20.704925    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:25.933190    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:25.933212    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:25.706237    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:25.706282    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:30.933439    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:30.933482    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:30.707897    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:30.707957    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:35.934083    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:35.934101    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:35.708395    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:35.708562    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:39:35.719540    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:39:35.719620    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:39:35.729807    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:39:35.729866    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:39:35.740105    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:39:35.740173    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:39:35.751045    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:39:35.751116    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:39:35.761920    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:39:35.761986    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:39:35.772087    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:39:35.772146    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:39:35.782693    4650 logs.go:276] 0 containers: []
	W0805 16:39:35.782706    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:39:35.782763    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:39:35.800920    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:39:35.800940    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:39:35.800947    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:39:35.880869    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:39:35.880882    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:39:35.892209    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:39:35.892220    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:39:35.931441    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:39:35.931452    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:39:35.935955    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:39:35.935964    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:39:35.948067    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:39:35.948078    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:39:35.966297    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:39:35.966307    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:39:35.977478    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:39:35.977488    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:39:36.003755    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:39:36.003770    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:39:36.020544    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:39:36.020558    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:39:36.047755    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:39:36.047763    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:39:36.060539    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:39:36.060553    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:39:36.078297    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:39:36.078307    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:39:36.100713    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:39:36.100726    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:39:36.115648    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:39:36.115661    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:39:36.127339    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:39:36.127350    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:39:36.141207    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:39:36.141223    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:39:38.658460    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:40.934586    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:40.934645    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:43.660725    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
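Note on the pattern running through this stretch of the log: two minikube processes (PIDs 4412 and 4650, each driving its own QEMU VM, which under QEMU's default user-mode networking sees itself as 10.0.2.15) are polling the apiserver's /healthz endpoint in parallel, so their output is interleaved here and occasionally flushed slightly out of timestamp order. Each probe fails after roughly five seconds with a client timeout, which triggers a fresh diagnostic pass. Below is a minimal sketch of that probe, assuming a 5-second client timeout inferred from the gap between each "Checking" and "stopped" pair; the helper name and loop are illustrative, not minikube's actual api_server.go code.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz is a hypothetical helper mirroring the probes above:
// GET https://<node-ip>:8443/healthz with a short client-side timeout.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // inferred from the ~5s "Checking" -> "stopped" gap
		Transport: &http.Transport{
			// the apiserver serves a self-signed certificate, so skip verification
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// on timeout this carries Go's literal "context deadline exceeded
		// (Client.Timeout exceeded while awaiting headers)" error text
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	url := "https://10.0.2.15:8443/healthz"
	for attempt := 0; attempt < 3; attempt++ {
		fmt.Println("Checking apiserver healthz at", url, "...")
		if err := checkHealthz(url); err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err)
			continue // in the real flow a container/log diagnostic pass runs here
		}
		fmt.Println("healthz ok")
		return
	}
}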
	I0805 16:39:43.660846    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:39:43.673054    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:39:43.673134    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:39:43.684252    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:39:43.684321    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:39:43.694910    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:39:43.694985    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:39:43.705640    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:39:43.705719    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:39:43.715974    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:39:43.716043    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:39:43.733169    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:39:43.733244    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:39:43.743838    4650 logs.go:276] 0 containers: []
	W0805 16:39:43.743851    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:39:43.743908    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:39:43.754515    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:39:43.754533    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:39:43.754539    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:39:43.792329    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:39:43.792340    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:39:43.803990    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:39:43.804004    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:39:43.815653    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:39:43.815664    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:39:43.833552    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:39:43.833563    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:39:43.845962    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:39:43.845974    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:39:43.850581    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:39:43.850587    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:39:43.864614    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:39:43.864624    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:39:43.889139    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:39:43.889150    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:39:43.903395    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:39:43.903405    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:39:43.929193    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:39:43.929204    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:39:43.946433    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:39:43.946443    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:39:43.986733    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:39:43.986745    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:39:44.012708    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:39:44.012718    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:39:44.027310    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:39:44.027321    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:39:44.039709    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:39:44.039720    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:39:44.053531    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:39:44.053542    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
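Each diagnostic pass opens by enumerating the control-plane containers, one "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" call per component, as the ssh_runner lines above show. A rough local sketch of that enumeration step follows; the component list is taken from the log, the helper name is an assumption, and the real code runs these commands over SSH inside the VM.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs whose name matches k8s_<component>,
// mirroring the docker ps filter invocations in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("listing %s: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			// cf. the logs.go:278 warning for "kindnet" above
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // cf. logs.go:276
	}
}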
	I0805 16:39:45.935503    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:45.935550    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0805 16:39:46.261803    4412 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0805 16:39:46.265157    4412 out.go:177] * Enabled addons: storage-provisioner
	I0805 16:39:46.272985    4412 addons.go:510] duration metric: took 30.465595458s for enable addons: enabled=[storage-provisioner]
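Here process 4412 abandons the default-storageclass addon: making a StorageClass the default requires a List call against the apiserver, and the dial to 10.0.2.15:8443 times out, so only storage-provisioner is reported enabled. A hedged client-go sketch of such a bounded List call is below; the kubeconfig path is taken from the describe-nodes invocations in this log, and everything else is illustrative rather than minikube's actual addon code.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// kubeconfig path as used by the in-VM kubectl calls in this log
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	scs, err := clientset.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
	if err != nil {
		// with the apiserver unreachable this surfaces much like the log's
		// "Error listing StorageClasses: ... dial tcp 10.0.2.15:8443: i/o timeout"
		fmt.Println("Error listing StorageClasses:", err)
		return
	}
	fmt.Println("storage classes:", len(scs.Items))
}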
	I0805 16:39:46.569236    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:50.936606    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:50.936676    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:51.571534    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:51.571646    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:39:51.583290    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:39:51.583370    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:39:51.594217    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:39:51.594292    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:39:51.605087    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:39:51.605157    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:39:51.615882    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:39:51.615951    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:39:51.626432    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:39:51.626504    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:39:51.637048    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:39:51.637112    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:39:51.647188    4650 logs.go:276] 0 containers: []
	W0805 16:39:51.647200    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:39:51.647259    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:39:51.657907    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:39:51.657925    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:39:51.657931    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:39:51.694844    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:39:51.694862    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:39:51.709455    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:39:51.709466    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:39:51.722287    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:39:51.722302    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:39:51.742336    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:39:51.742348    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:39:51.753872    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:39:51.753883    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:39:51.766613    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:39:51.766626    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:39:51.804684    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:39:51.804697    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:39:51.818890    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:39:51.818900    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:39:51.843757    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:39:51.843768    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:39:51.866148    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:39:51.866159    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:39:51.870792    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:39:51.870801    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:39:51.886454    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:39:51.886465    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:39:51.912482    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:39:51.912494    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:39:51.924911    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:39:51.924924    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:39:51.945882    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:39:51.945893    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:39:51.957660    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:39:51.957671    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
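After enumeration, each pass tails the last 400 lines of every container it found, pulls the kubelet and docker/cri-docker journals, runs describe nodes, and finally captures container status with a shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a tries crictl first and falls back to plain docker ps when crictl is missing or fails. A compressed sketch of that gathering step follows, again run locally rather than through ssh_runner; the container IDs are copied from the log and the helper is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one collection command and returns whatever it printed,
// tolerating failure the way a best-effort log sweep must.
func gather(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Sprintf("(%s failed: %v)\n", name, err)
	}
	return string(out)
}

func main() {
	// per-container logs, mirroring `docker logs --tail 400 <id>`
	for _, id := range []string{"232b5973da55", "ab36bdbff57a"} {
		fmt.Printf("==> container %s <==\n%s", id, gather("docker", "logs", "--tail", "400", id))
	}
	// unit journals, mirroring `journalctl -u kubelet -n 400` and friends
	fmt.Print(gather("journalctl", "-u", "kubelet", "-n", "400"))
	// container status with the crictl-or-docker fallback seen in the log
	fmt.Print(gather("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"))
}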
	I0805 16:39:54.474892    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:55.937981    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:55.938054    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:59.477511    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:59.477630    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:39:59.490631    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:39:59.490704    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:39:59.502863    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:39:59.502928    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:39:59.513822    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:39:59.513892    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:39:59.525527    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:39:59.525601    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:39:59.538085    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:39:59.538154    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:39:59.548485    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:39:59.548555    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:39:59.559157    4650 logs.go:276] 0 containers: []
	W0805 16:39:59.559168    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:39:59.559222    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:39:59.570613    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:39:59.570631    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:39:59.570636    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:39:59.586971    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:39:59.586984    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:39:59.613383    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:39:59.613394    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:39:59.627462    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:39:59.627472    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:39:59.645438    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:39:59.645449    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:39:59.656494    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:39:59.656506    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:39:59.660602    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:39:59.660609    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:39:59.705911    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:39:59.705923    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:39:59.718519    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:39:59.718532    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:39:59.730207    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:39:59.730218    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:39:59.746115    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:39:59.746126    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:39:59.762048    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:39:59.762059    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:00.939924    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:00.939968    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:59.787813    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:39:59.787824    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:39:59.804265    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:39:59.804276    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:39:59.826204    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:39:59.826215    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:39:59.842187    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:39:59.842200    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:39:59.881238    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:39:59.881249    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:40:02.397700    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:05.941365    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:05.941412    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:07.399976    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:07.400232    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:07.418651    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:40:07.418734    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:07.432804    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:40:07.432874    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:07.444542    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:40:07.444615    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:07.455325    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:40:07.455400    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:07.465916    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:40:07.465984    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:07.476195    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:40:07.476263    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:07.486702    4650 logs.go:276] 0 containers: []
	W0805 16:40:07.486714    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:07.486765    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:07.496935    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:40:07.496953    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:40:07.496959    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:40:07.510721    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:40:07.510732    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:40:07.531883    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:40:07.531895    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:40:07.551980    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:40:07.551992    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:40:07.564374    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:40:07.564390    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:40:07.576813    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:40:07.576828    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:40:07.592217    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:40:07.592228    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:07.604044    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:07.604060    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:07.642865    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:07.642878    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:07.647661    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:40:07.647669    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:40:07.661815    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:40:07.661827    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:40:07.674294    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:07.674306    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:07.699549    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:07.699556    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:07.735107    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:40:07.735116    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:40:07.760559    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:40:07.760570    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:40:07.772309    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:40:07.772324    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:40:07.794234    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:40:07.794245    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:40:10.943083    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:10.943137    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:10.311663    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:15.945302    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:15.945395    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:15.956508    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:40:15.956574    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:15.966456    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:40:15.966524    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:15.977446    4412 logs.go:276] 2 containers: [bbc24100193e b6a5ca2c0447]
	I0805 16:40:15.977513    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:15.989205    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:40:15.989274    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:15.999824    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:40:15.999903    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:16.010314    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:40:16.010382    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:16.020307    4412 logs.go:276] 0 containers: []
	W0805 16:40:16.020321    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:16.020383    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:16.031083    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:40:16.031098    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:40:16.031103    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:40:16.043841    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:40:16.043852    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:40:16.055855    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:40:16.055867    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:40:16.071236    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:40:16.071251    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:40:16.082966    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:40:16.082977    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:40:16.106318    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:40:16.106328    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:40:16.120847    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:16.120857    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:16.125595    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:16.125603    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:16.160753    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:40:16.160764    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:40:16.176314    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:40:16.176325    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:40:16.187838    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:16.187855    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:16.213162    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:40:16.213174    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:16.224645    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:16.224662    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:18.766069    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:15.313914    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:15.314100    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:15.331451    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:40:15.331521    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:15.346195    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:40:15.346260    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:15.356744    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:40:15.356812    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:15.367053    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:40:15.367122    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:15.377853    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:40:15.377926    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:15.389364    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:40:15.389427    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:15.399538    4650 logs.go:276] 0 containers: []
	W0805 16:40:15.399552    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:15.399600    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:15.414453    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:40:15.414472    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:15.414477    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:15.419335    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:40:15.419342    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:40:15.448070    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:40:15.448080    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:40:15.468949    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:40:15.468960    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:40:15.483645    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:40:15.483655    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:15.495494    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:40:15.495506    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:40:15.507572    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:40:15.507582    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:40:15.519773    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:15.519785    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:15.558979    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:40:15.558990    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:40:15.578486    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:40:15.578499    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:40:15.591247    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:15.591261    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:15.616964    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:15.616975    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:15.653173    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:40:15.653188    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:40:15.667388    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:40:15.667400    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:40:15.684177    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:40:15.684188    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:40:15.695587    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:40:15.695598    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:40:15.713422    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:40:15.713434    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:40:18.225817    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:23.768191    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:23.768337    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:23.780596    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:40:23.780666    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:23.790849    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:40:23.790921    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:23.801151    4412 logs.go:276] 2 containers: [bbc24100193e b6a5ca2c0447]
	I0805 16:40:23.801220    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:23.811647    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:40:23.811717    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:23.821586    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:40:23.821650    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:23.831954    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:40:23.832020    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:23.842046    4412 logs.go:276] 0 containers: []
	W0805 16:40:23.842057    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:23.842114    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:23.228067    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:23.228300    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:23.254451    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:40:23.254553    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:23.273166    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:40:23.273252    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:23.286561    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:40:23.286643    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:23.298149    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:40:23.298216    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:23.308490    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:40:23.308548    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:23.319283    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:40:23.319347    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:23.329523    4650 logs.go:276] 0 containers: []
	W0805 16:40:23.329534    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:23.329591    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:23.340447    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:40:23.340468    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:40:23.340474    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:40:23.354540    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:40:23.354550    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:40:23.379256    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:40:23.379267    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:40:23.402254    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:23.402263    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:23.441304    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:40:23.441313    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:40:23.452272    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:40:23.452287    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:40:23.463564    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:40:23.463575    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:40:23.477802    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:40:23.477812    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:23.491008    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:23.491021    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:23.495656    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:40:23.495663    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:40:23.512941    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:40:23.512956    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:40:23.531247    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:23.531257    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:23.556249    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:23.556258    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:23.594780    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:40:23.594793    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:40:23.609547    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:40:23.609560    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:40:23.631317    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:40:23.631329    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:40:23.643106    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:40:23.643118    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:40:23.856546    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:40:23.856560    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:40:23.856565    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:40:23.868266    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:40:23.868276    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:40:23.879823    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:23.879836    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:23.904479    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:40:23.904491    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:23.916209    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:23.916221    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:23.954627    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:40:23.954640    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:40:23.969345    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:40:23.969358    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:40:23.983801    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:40:23.983814    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:40:23.997903    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:40:23.997914    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:40:24.012459    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:40:24.012469    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:40:24.030296    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:24.030307    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:24.069698    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:24.069709    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:24.074838    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:40:24.074845    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:40:26.588226    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:26.164924    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:31.590712    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:31.590803    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:31.601463    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:40:31.601536    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:31.619409    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:40:31.619473    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:31.630351    4412 logs.go:276] 2 containers: [bbc24100193e b6a5ca2c0447]
	I0805 16:40:31.630428    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:31.644072    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:40:31.644133    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:31.658386    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:40:31.658464    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:31.670075    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:40:31.670147    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:31.680685    4412 logs.go:276] 0 containers: []
	W0805 16:40:31.680696    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:31.680750    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:31.691163    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:40:31.691181    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:31.691187    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:31.729932    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:31.729941    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:31.734508    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:31.734514    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:31.772075    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:40:31.772086    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:40:31.786884    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:40:31.786895    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:40:31.802131    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:40:31.802142    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:40:31.814153    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:40:31.814164    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:40:31.829820    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:40:31.829830    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:40:31.843555    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:40:31.843569    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:40:31.855324    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:40:31.855334    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:40:31.866935    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:40:31.866948    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:40:31.884289    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:31.884302    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:31.907881    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:40:31.907889    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:31.167257    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:31.167466    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:31.187535    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:40:31.187624    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:31.201701    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:40:31.201776    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:31.213105    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:40:31.213177    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:31.223608    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:40:31.223675    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:31.234298    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:40:31.234368    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:31.244877    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:40:31.244950    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:31.254923    4650 logs.go:276] 0 containers: []
	W0805 16:40:31.254934    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:31.254994    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:31.265506    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:40:31.265523    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:40:31.265529    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:40:31.279693    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:40:31.279703    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:40:31.308731    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:40:31.308743    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:40:31.322770    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:40:31.322781    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:40:31.337940    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:40:31.337953    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:40:31.349255    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:40:31.349266    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:40:31.378371    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:40:31.378382    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:40:31.395930    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:31.395939    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:31.420926    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:40:31.420936    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:40:31.435330    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:40:31.435340    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:40:31.446630    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:40:31.446641    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:40:31.469743    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:40:31.469754    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:31.481327    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:31.481337    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:31.518026    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:31.518036    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:31.522160    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:31.522169    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:31.556659    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:40:31.556670    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:40:31.569246    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:40:31.569257    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:40:34.084180    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:34.421786    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:39.086419    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:39.086520    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:39.097836    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:40:39.097911    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:39.111108    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:40:39.111181    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:39.122009    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:40:39.122074    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:39.137076    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:40:39.137150    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:39.148068    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:40:39.148142    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:39.158860    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:40:39.158933    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:39.169644    4650 logs.go:276] 0 containers: []
	W0805 16:40:39.169655    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:39.169716    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:39.180535    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:40:39.180551    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:40:39.180556    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:40:39.202404    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:40:39.202415    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:40:39.213713    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:40:39.213725    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:40:39.238913    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:40:39.238923    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:40:39.252791    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:40:39.252801    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:40:39.264055    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:40:39.264067    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:40:39.281792    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:40:39.281802    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:40:39.296205    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:40:39.296218    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:39.308368    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:39.308378    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:39.345299    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:40:39.345307    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:40:39.357072    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:40:39.357082    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:40:39.368004    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:39.368016    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:39.391349    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:39.391357    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:39.395731    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:39.395742    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:39.430687    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:40:39.430699    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:40:39.446547    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:40:39.446559    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:40:39.461978    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:40:39.461989    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:40:39.424017    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:39.424125    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:39.435681    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:40:39.435754    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:39.447359    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:40:39.447429    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:39.458908    4412 logs.go:276] 2 containers: [bbc24100193e b6a5ca2c0447]
	I0805 16:40:39.458983    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:39.470544    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:40:39.470664    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:39.481224    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:40:39.481295    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:39.491913    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:40:39.491978    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:39.501714    4412 logs.go:276] 0 containers: []
	W0805 16:40:39.501725    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:39.501787    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:39.512327    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:40:39.512340    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:40:39.512345    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:40:39.523996    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:40:39.524009    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:39.535439    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:39.535449    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:39.539784    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:39.539790    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:39.576880    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:40:39.576890    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:40:39.590653    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:40:39.590668    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:40:39.602405    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:40:39.602419    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:40:39.617135    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:40:39.617148    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:40:39.635977    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:40:39.635987    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:40:39.647496    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:39.647506    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:39.672061    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:39.672068    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:39.711289    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:40:39.711307    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:40:39.725467    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:40:39.725511    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:40:42.239521    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:41.976777    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:47.241774    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:47.241864    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:47.254189    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:40:47.254259    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:47.265287    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:40:47.265360    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:47.276872    4412 logs.go:276] 2 containers: [bbc24100193e b6a5ca2c0447]
	I0805 16:40:47.276945    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:47.292537    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:40:47.292603    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:47.304072    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:40:47.304147    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:47.319674    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:40:47.319747    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:47.330257    4412 logs.go:276] 0 containers: []
	W0805 16:40:47.330264    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:47.330323    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:47.341286    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:40:47.341301    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:40:47.341307    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:40:47.357630    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:40:47.357642    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:40:47.373693    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:40:47.373701    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:40:47.386545    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:40:47.386556    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:40:47.405784    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:40:47.405793    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:40:47.421380    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:47.421390    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:47.425995    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:47.426002    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:47.461640    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:40:47.461651    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:40:47.474308    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:40:47.474321    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:40:47.489532    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:40:47.489543    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:40:47.501412    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:47.501423    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:47.526762    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:40:47.526769    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:47.538593    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:47.538607    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:46.979162    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:46.979591    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:47.015190    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:40:47.015320    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:47.034797    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:40:47.034889    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:47.049621    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:40:47.049701    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:47.061749    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:40:47.061823    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:47.072401    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:40:47.072470    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:47.083092    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:40:47.083162    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:47.093540    4650 logs.go:276] 0 containers: []
	W0805 16:40:47.093554    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:47.093611    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:47.103774    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:40:47.103820    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:47.103826    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:47.139099    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:40:47.139113    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:40:47.151027    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:47.151038    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:47.155843    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:40:47.155850    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:40:47.181076    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:40:47.181086    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:40:47.195372    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:40:47.195381    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:40:47.206764    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:40:47.206776    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:47.219040    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:47.219050    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:47.259307    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:40:47.259327    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:40:47.277935    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:40:47.277946    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:40:47.292668    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:40:47.292677    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:40:47.304948    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:47.304959    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:47.330231    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:40:47.330250    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:40:47.352962    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:40:47.352977    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:40:47.371979    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:40:47.371994    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:40:47.388678    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:40:47.388691    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:40:47.401703    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:40:47.401715    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:40:50.079807    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:49.916008    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:55.081887    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:55.081998    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:55.093280    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:40:55.093345    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:55.104372    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:40:55.104438    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:55.115632    4412 logs.go:276] 2 containers: [bbc24100193e b6a5ca2c0447]
	I0805 16:40:55.115703    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:55.127189    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:40:55.127257    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:55.139584    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:40:55.139655    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:55.151789    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:40:55.151855    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:55.163220    4412 logs.go:276] 0 containers: []
	W0805 16:40:55.163229    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:55.163283    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:55.174453    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:40:55.174466    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:55.174472    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:55.179043    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:55.179053    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:55.215493    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:40:55.215508    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:40:55.231489    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:40:55.231501    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:40:55.244705    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:40:55.244716    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:40:55.257426    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:55.257438    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:55.283469    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:55.283482    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:55.325763    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:40:55.325779    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:40:55.341916    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:40:55.341930    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:40:55.357643    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:40:55.357654    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:40:55.369307    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:40:55.369318    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:40:55.380468    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:40:55.380479    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:40:55.397825    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:40:55.397835    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:57.911043    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:54.918238    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:54.918467    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:54.941035    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:40:54.941128    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:54.957384    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:40:54.957458    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:54.969676    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:40:54.969744    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:54.980807    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:40:54.980880    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:54.991842    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:40:54.991918    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:55.002356    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:40:55.002419    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:55.013428    4650 logs.go:276] 0 containers: []
	W0805 16:40:55.013443    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:55.013507    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:55.024657    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:40:55.024676    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:40:55.024682    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:40:55.039545    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:40:55.039558    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:40:55.051642    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:40:55.051654    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:40:55.070222    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:40:55.070232    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:40:55.090639    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:55.090652    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:55.095939    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:55.095949    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:55.134140    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:40:55.134153    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:40:55.161190    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:55.161209    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:55.201965    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:40:55.201990    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:40:55.219062    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:40:55.219073    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:40:55.233208    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:40:55.233220    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:40:55.247936    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:40:55.247946    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:40:55.271593    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:40:55.271604    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:55.284724    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:55.284732    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:55.309078    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:40:55.309088    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:40:55.323446    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:40:55.323457    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:40:55.335521    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:40:55.335536    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:40:57.849599    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:02.912979    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:02.913065    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:02.924359    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:41:02.924434    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:02.936047    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:41:02.936127    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:02.951750    4412 logs.go:276] 2 containers: [bbc24100193e b6a5ca2c0447]
	I0805 16:41:02.951819    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:02.963566    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:41:02.963635    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:02.974892    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:41:02.974984    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:02.989039    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:41:02.989115    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:03.000297    4412 logs.go:276] 0 containers: []
	W0805 16:41:03.000307    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:03.000366    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:03.011815    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:41:03.011836    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:41:03.011843    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:41:03.024923    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:03.024934    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:03.050365    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:03.050375    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:03.054740    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:41:03.054746    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:41:03.074786    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:41:03.074801    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:41:03.088631    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:41:03.088645    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:41:03.102411    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:41:03.102422    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:41:03.115297    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:41:03.115308    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:03.128133    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:03.128145    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:03.167595    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:03.167622    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:03.204888    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:41:03.204900    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:41:03.220297    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:41:03.220310    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:41:03.236106    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:41:03.236114    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:41:02.851754    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:02.851996    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:02.882864    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:41:02.882964    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:02.900666    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:41:02.900735    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:02.913447    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:41:02.913487    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:02.925179    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:41:02.925218    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:02.936096    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:41:02.936132    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:02.947524    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:41:02.947593    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:02.959165    4650 logs.go:276] 0 containers: []
	W0805 16:41:02.959177    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:02.959236    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:02.970551    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:41:02.970572    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:41:02.970577    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:41:02.982792    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:41:02.982804    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:41:03.002520    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:41:03.002532    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:41:03.015232    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:03.015244    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:03.056155    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:03.056165    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:03.100439    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:41:03.100453    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:41:03.112774    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:03.112789    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:03.138143    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:03.138155    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:03.143168    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:41:03.143177    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:41:03.157369    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:41:03.157379    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:41:03.169694    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:41:03.169702    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:41:03.192509    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:41:03.192525    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:41:03.208280    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:41:03.208289    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:41:03.234991    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:41:03.235007    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:41:03.249855    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:41:03.249866    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:41:03.264896    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:41:03.264907    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:41:03.277335    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:41:03.277345    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:05.757516    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:05.790096    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:10.759815    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:10.760204    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:10.789031    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:41:10.789157    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:10.807271    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:41:10.807364    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:10.821980    4412 logs.go:276] 2 containers: [bbc24100193e b6a5ca2c0447]
	I0805 16:41:10.822062    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:10.834620    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:41:10.834696    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:10.846289    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:41:10.846365    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:10.858658    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:41:10.858732    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:10.871983    4412 logs.go:276] 0 containers: []
	W0805 16:41:10.871990    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:10.872019    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:10.883838    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:41:10.883850    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:41:10.883854    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:41:10.898910    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:41:10.898924    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:41:10.911605    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:41:10.911615    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:41:10.927776    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:41:10.927789    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:41:10.946553    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:41:10.946563    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:41:10.959445    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:10.959456    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:10.985575    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:10.985591    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:10.990659    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:10.990669    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:11.028359    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:41:11.028372    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:41:11.043593    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:41:11.043606    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:41:11.056291    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:41:11.056304    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:41:11.076706    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:41:11.076718    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:11.089736    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:11.089744    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:13.633697    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:10.790796    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:10.790908    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:10.808889    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:41:10.808941    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:10.822596    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:41:10.822633    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:10.836659    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:41:10.836731    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:10.847979    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:41:10.848039    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:10.859823    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:41:10.859865    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:10.871497    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:41:10.871569    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:10.883244    4650 logs.go:276] 0 containers: []
	W0805 16:41:10.883263    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:10.883366    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:10.894763    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:41:10.894781    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:10.894786    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:10.900203    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:41:10.900212    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:41:10.916019    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:41:10.916035    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:41:10.947751    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:41:10.947759    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:41:10.967558    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:10.967569    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:11.006225    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:41:11.006244    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:41:11.020949    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:41:11.020961    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:41:11.036962    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:41:11.036973    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:41:11.049425    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:41:11.049437    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:41:11.076104    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:41:11.076120    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:41:11.088727    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:11.088740    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:11.125125    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:41:11.125136    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:41:11.138869    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:41:11.138885    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:41:11.151125    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:41:11.151136    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:41:11.162685    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:11.162697    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:11.186238    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:41:11.186247    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:11.198134    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:41:11.198146    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:41:13.718385    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:18.635958    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:18.636183    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:18.651035    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:41:18.651109    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:18.663230    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:41:18.663297    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:18.673632    4412 logs.go:276] 2 containers: [bbc24100193e b6a5ca2c0447]
	I0805 16:41:18.673697    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:18.684213    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:41:18.684273    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:18.694792    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:41:18.694854    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:18.704904    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:41:18.704969    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:18.715529    4412 logs.go:276] 0 containers: []
	W0805 16:41:18.715539    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:18.715592    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:18.727938    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:41:18.727954    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:18.727960    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:18.733032    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:41:18.733039    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:41:18.748572    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:41:18.748583    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:41:18.761258    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:41:18.761268    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:41:18.777534    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:18.777547    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:18.803375    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:41:18.803389    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:41:18.815963    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:41:18.815975    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:18.829406    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:18.829417    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:18.720610    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:18.720689    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:18.732327    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:41:18.732398    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:18.744022    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:41:18.744086    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:18.759002    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:41:18.759079    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:18.772267    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:41:18.772344    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:18.784636    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:41:18.784712    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:18.796335    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:41:18.796407    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:18.806824    4650 logs.go:276] 0 containers: []
	W0805 16:41:18.806835    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:18.806899    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:18.818748    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:41:18.818764    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:41:18.818771    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:41:18.841576    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:41:18.841586    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:41:18.854511    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:41:18.854523    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:41:18.867032    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:41:18.867044    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:18.881273    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:18.881284    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:18.923137    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:41:18.923148    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:41:18.938139    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:41:18.938148    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:41:18.953257    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:41:18.953267    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:41:18.965678    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:41:18.965689    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:41:18.981071    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:41:18.981086    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:41:18.992335    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:41:18.992346    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:41:19.007453    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:19.007463    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:19.032797    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:19.032809    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:19.037281    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:41:19.037290    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:41:19.062113    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:41:19.062125    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:41:19.082887    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:19.082899    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:19.122168    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:41:19.122181    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
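
The repeated `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` runs above are how each gathering pass enumerates a component's containers (running and exited) by name prefix before pulling their logs. Below is a minimal local sketch of that enumeration shape in Go, assuming only a docker CLI on PATH; it illustrates the command the log records, not minikube's actual ssh_runner plumbing, which executes the same command over SSH inside the guest.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listK8sContainers returns the IDs of all containers, running or exited,
// whose name matches the k8s_<component> prefix, mirroring the
// `docker ps -a --filter=name=k8s_... --format={{.ID}}` calls in the log.
func listK8sContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One ID per line; Fields also drops the trailing newline.
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := listK8sContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
	}
}

A component that matches nothing yields an empty slice, which is what produces the `0 containers: []` line followed by the `No container was found matching "kindnet"` warning in every pass above.
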
	I0805 16:41:18.870267    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:18.870277    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:18.908031    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:41:18.908045    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:41:18.923994    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:41:18.924005    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:41:18.937685    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:41:18.937695    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:41:18.950569    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:41:18.950584    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:41:21.471652    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:21.635633    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:26.473784    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:26.474198    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:26.514149    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:41:26.514284    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:26.536434    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:41:26.536525    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:26.551838    4412 logs.go:276] 2 containers: [bbc24100193e b6a5ca2c0447]
	I0805 16:41:26.551917    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:26.564546    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:41:26.564625    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:26.575778    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:41:26.575852    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:26.586776    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:41:26.586844    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:26.597378    4412 logs.go:276] 0 containers: []
	W0805 16:41:26.597389    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:26.597443    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:26.608532    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:41:26.608549    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:26.608558    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:26.654624    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:41:26.654635    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:41:26.670954    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:26.670969    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:26.697301    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:26.697313    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:26.738201    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:26.738210    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:26.743312    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:41:26.743328    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:41:26.756812    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:41:26.756824    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:41:26.770212    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:41:26.770226    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:41:26.783582    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:41:26.783592    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:41:26.802611    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:41:26.802624    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:41:26.815319    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:41:26.815332    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:26.827921    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:41:26.827933    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:41:26.843412    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:41:26.843423    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:41:26.637916    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:26.638028    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:26.650309    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:41:26.650400    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:26.668117    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:41:26.668191    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:26.680312    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:41:26.680386    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:26.692689    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:41:26.692765    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:26.704069    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:41:26.704139    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:26.715309    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:41:26.715380    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:26.725846    4650 logs.go:276] 0 containers: []
	W0805 16:41:26.725859    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:26.725921    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:26.736991    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:41:26.737009    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:41:26.737014    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:41:26.749466    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:41:26.749480    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:41:26.762437    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:41:26.762449    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:41:26.780991    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:41:26.781007    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:41:26.793834    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:41:26.793847    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:26.806863    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:41:26.806874    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:41:26.822500    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:41:26.822516    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:41:26.839615    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:41:26.839628    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:41:26.862124    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:26.862138    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:26.885826    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:26.885834    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:26.927610    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:41:26.927621    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:41:26.957172    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:41:26.957181    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:41:26.968610    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:26.968624    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:27.007655    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:27.007667    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:27.012017    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:41:27.012023    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:41:27.026233    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:41:27.026248    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:41:27.041950    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:41:27.041961    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
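
Each `Checking apiserver healthz` line above is followed five seconds later by `stopped: ... context deadline exceeded (Client.Timeout exceeded while awaiting headers)`, after which the whole container-enumeration and log-gathering pass repeats. The following is a hedged sketch of that probe loop: the endpoint and the five-second client timeout are taken directly from the log, while the bounded retry count, back-off, and TLS handling here are assumptions for illustration rather than minikube's actual api_server.go logic.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz issues one GET against the apiserver healthz endpoint with a
// hard client timeout, matching the ~5s gap between the "Checking" and
// "stopped" lines in the log. InsecureSkipVerify stands in for minikube's
// real client-certificate and CA handling (an assumption).
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // yields "Client.Timeout exceeded while awaiting headers"
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
	return nil
}

func main() {
	const url = "https://10.0.2.15:8443/healthz"
	for attempt := 0; attempt < 5; attempt++ { // bounded here; the real run retries far longer
		fmt.Println("Checking apiserver healthz at", url, "...")
		if err := probeHealthz(url); err != nil {
			fmt.Println("stopped:", err)
			time.Sleep(3 * time.Second) // assumed back-off between passes
			continue
		}
		return
	}
}

Because the apiserver container never becomes healthy in this run, every pass ends the same way, which is why the same eight `docker ps` enumerations recur every few seconds for both PIDs.
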
	I0805 16:41:29.558974    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:29.360020    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:34.561090    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:34.561169    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:34.572566    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:41:34.572630    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:34.584546    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:41:34.584609    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:34.596025    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:41:34.596089    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:34.607394    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:41:34.607461    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:34.619269    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:41:34.619345    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:34.630708    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:41:34.630786    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:34.644241    4650 logs.go:276] 0 containers: []
	W0805 16:41:34.644250    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:34.644309    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:34.661652    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:41:34.661672    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:34.661678    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:34.700507    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:41:34.700521    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:41:34.717589    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:41:34.717601    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:41:34.732780    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:34.732794    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:34.772203    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:41:34.772215    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:41:34.362146    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:34.362280    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:34.381122    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:41:34.381196    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:34.392554    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:41:34.392620    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:34.407301    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:41:34.407374    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:34.417508    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:41:34.417575    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:34.427539    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:41:34.427606    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:34.437720    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:41:34.437784    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:34.447770    4412 logs.go:276] 0 containers: []
	W0805 16:41:34.447781    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:34.447836    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:34.458190    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:41:34.458215    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:41:34.458221    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:41:34.469226    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:34.469238    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:34.503834    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:41:34.503844    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:41:34.515387    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:41:34.515397    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:41:34.539187    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:34.539198    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:34.578538    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:34.578558    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:34.583342    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:41:34.583353    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:41:34.598393    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:41:34.598404    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:41:34.610853    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:41:34.610870    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:41:34.628102    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:41:34.628114    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:41:34.644063    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:41:34.644079    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:41:34.656954    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:41:34.656966    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:34.670370    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:41:34.670381    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:41:34.686263    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:41:34.686276    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:41:34.698049    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:34.698060    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:37.226524    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:34.784773    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:41:34.784784    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:41:34.806035    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:41:34.806046    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:41:34.823661    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:41:34.823672    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:41:34.835174    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:41:34.835186    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:41:34.846720    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:34.846733    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:34.871430    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:41:34.871439    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:41:34.897257    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:41:34.897268    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:41:34.911380    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:41:34.911392    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:41:34.927559    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:41:34.927571    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:34.939126    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:34.939137    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:34.943461    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:41:34.943468    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:41:34.956936    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:41:34.956947    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:41:37.471463    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:42.228697    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:42.228841    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:42.240921    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:41:42.240994    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:42.257520    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:41:42.257589    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:42.268364    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:41:42.268433    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:42.278905    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:41:42.278975    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:42.289116    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:41:42.289181    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:42.299785    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:41:42.299853    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:42.310185    4412 logs.go:276] 0 containers: []
	W0805 16:41:42.310196    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:42.310249    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:42.320704    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:41:42.320725    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:42.320730    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:42.325621    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:42.325631    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:42.361282    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:41:42.361292    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:41:42.372781    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:41:42.372791    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:41:42.396106    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:41:42.396118    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:41:42.430411    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:41:42.430422    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:42.445499    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:42.445510    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:42.484837    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:41:42.484857    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:41:42.497934    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:41:42.497944    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:41:42.514398    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:42.514414    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:42.540269    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:41:42.540283    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:41:42.557283    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:41:42.557295    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:41:42.572076    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:41:42.572088    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:41:42.592337    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:41:42.592352    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:41:42.604364    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:41:42.604374    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:41:42.474012    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:42.474159    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:42.485605    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:41:42.485674    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:42.496825    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:41:42.496903    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:42.508531    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:41:42.508603    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:42.524506    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:41:42.524581    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:42.536905    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:41:42.536980    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:42.548632    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:41:42.548711    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:42.560068    4650 logs.go:276] 0 containers: []
	W0805 16:41:42.560079    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:42.560139    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:42.572112    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:41:42.572128    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:42.572133    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:42.612863    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:42.612881    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:42.654429    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:41:42.654440    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:41:42.680844    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:41:42.680853    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:41:42.695967    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:41:42.695979    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:41:42.716753    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:41:42.716764    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:41:42.728901    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:41:42.728912    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:41:42.740428    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:42.740438    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:42.744639    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:41:42.744645    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:41:42.759275    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:41:42.759285    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:41:42.771336    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:41:42.771349    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:41:42.789227    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:41:42.789238    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:41:42.804244    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:41:42.804254    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:41:42.819457    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:41:42.819470    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:41:42.830946    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:41:42.830957    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:41:42.842592    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:42.842603    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:42.865375    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:41:42.865384    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
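
A full gathering pass shells each collection command through `/bin/bash -c`: `journalctl` for the kubelet and Docker/cri-docker units, a filtered `dmesg`, `crictl ps -a` (falling back to `docker ps -a`) for container status, `docker logs --tail 400 <id>` per enumerated container, and `kubectl describe nodes` against the on-node kubeconfig. Below is a condensed local sketch of one such pass, assuming passwordless sudo on the node; the container ID is one enumerated earlier in this log, and running these over SSH as ssh_runner.go does is omitted.

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one collection command the way the log shows minikube wrapping
// it, via `/bin/bash -c`, and prints the combined output under a label.
func gather(label, command string) {
	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
	fmt.Printf("=== %s ===\n%s", label, out)
	if err != nil {
		fmt.Println("error:", err)
	}
}

func main() {
	// Commands copied verbatim from the log; tails are capped at 400 lines.
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")

	// One per-container example; the real pass loops over every enumerated ID.
	id := "0b63e308c0f5" // a kube-apiserver container ID from the enumeration above
	gather("kube-apiserver "+id, "docker logs --tail 400 "+id)
}
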
	I0805 16:41:45.122739    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:45.379572    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:50.125005    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:50.125147    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:50.141318    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:41:50.141402    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:50.154303    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:41:50.154378    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:50.166438    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:41:50.166509    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:50.177182    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:41:50.177245    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:50.187447    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:41:50.187518    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:50.198031    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:41:50.198095    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:50.210970    4412 logs.go:276] 0 containers: []
	W0805 16:41:50.210980    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:50.211034    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:50.221516    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:41:50.221533    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:41:50.221538    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:41:50.233225    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:41:50.233239    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:50.244523    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:50.244536    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:50.281467    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:41:50.281489    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:41:50.296432    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:41:50.296447    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:41:50.308610    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:41:50.308619    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:41:50.323751    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:41:50.323762    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:41:50.335968    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:41:50.335978    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:41:50.347619    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:41:50.347630    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:41:50.361961    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:41:50.361971    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:41:50.382558    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:41:50.382568    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:41:50.400753    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:50.400766    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:50.426277    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:50.426287    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:50.431695    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:50.431706    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:50.471620    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:41:50.471628    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:41:52.987275    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:50.381717    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:50.381823    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:50.396827    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:41:50.396900    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:50.412151    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:41:50.412218    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:50.423255    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:41:50.423322    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:50.437203    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:41:50.437276    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:50.448433    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:41:50.448512    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:50.459841    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:41:50.459920    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:50.471598    4650 logs.go:276] 0 containers: []
	W0805 16:41:50.471609    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:50.471671    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:50.483560    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:41:50.483580    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:41:50.483585    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:41:50.498916    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:41:50.498927    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:41:50.510433    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:41:50.510445    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:41:50.522269    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:41:50.522280    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:41:50.544151    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:41:50.544163    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:41:50.558798    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:50.558812    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:50.581261    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:41:50.581270    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:41:50.595251    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:41:50.595262    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:41:50.610209    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:41:50.610220    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:41:50.622181    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:50.622191    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:50.659566    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:50.659577    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:50.694690    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:41:50.694701    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:41:50.719448    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:41:50.719460    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:41:50.730841    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:41:50.730851    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:50.742274    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:50.742287    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:50.746690    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:41:50.746698    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:41:50.764853    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:41:50.764864    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:41:53.279006    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:57.989610    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:57.989859    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:58.015896    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:41:58.016003    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:58.032462    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:41:58.032544    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:58.045769    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:41:58.045843    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:58.056417    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:41:58.056493    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:58.067379    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:41:58.067453    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:58.078055    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:41:58.078125    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:58.088091    4412 logs.go:276] 0 containers: []
	W0805 16:41:58.088101    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:58.088156    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:58.098767    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:41:58.098784    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:41:58.098789    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:41:58.115885    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:41:58.115901    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:41:58.130731    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:41:58.130745    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:41:58.142866    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:41:58.142876    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:41:58.154558    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:41:58.154570    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:41:58.167157    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:41:58.167169    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:41:58.185681    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:58.185695    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:58.211056    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:41:58.211073    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:41:58.229558    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:41:58.229572    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:41:58.242715    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:41:58.242729    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:41:58.253864    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:41:58.253875    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:58.265821    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:58.265837    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:58.304717    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:58.304743    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:58.309697    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:58.309707    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:58.351691    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:41:58.351704    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:41:58.279577    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:58.279663    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:58.290656    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:41:58.290734    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:58.305811    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:41:58.305880    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:58.317364    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:41:58.317443    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:58.328688    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:41:58.328764    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:58.340356    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:41:58.340436    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:58.351932    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:41:58.352000    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:58.364682    4650 logs.go:276] 0 containers: []
	W0805 16:41:58.364696    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:58.364762    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:58.375868    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:41:58.375888    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:58.375893    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:58.414995    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:58.415011    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:58.419496    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:41:58.419505    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:41:58.433381    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:41:58.433395    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:41:58.458108    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:41:58.458121    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:41:58.469843    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:41:58.469853    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:41:58.485046    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:41:58.485056    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:41:58.502094    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:41:58.502105    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:41:58.514190    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:41:58.514202    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:41:58.531805    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:41:58.531816    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:41:58.546512    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:41:58.546526    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:58.559700    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:58.559709    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:58.594165    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:41:58.594178    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:41:58.608519    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:41:58.608527    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:41:58.627019    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:41:58.627029    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:41:58.647665    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:58.647676    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:58.669728    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:41:58.669739    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:42:00.867350    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:01.183906    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:05.869613    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:05.869819    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:42:05.886968    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:42:05.887047    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:42:05.899556    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:42:05.899632    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:42:05.914351    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:42:05.914428    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:42:05.924600    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:42:05.924672    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:42:05.935420    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:42:05.935483    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:42:05.945868    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:42:05.945935    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:42:05.956203    4412 logs.go:276] 0 containers: []
	W0805 16:42:05.956219    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:42:05.956280    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:42:05.966856    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:42:05.966871    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:42:05.966876    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:42:05.988792    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:42:05.988801    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:42:06.000533    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:42:06.000544    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:42:06.012069    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:42:06.012082    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:42:06.023633    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:42:06.023644    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:42:06.059487    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:42:06.059499    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:42:06.074042    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:42:06.074051    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:42:06.085204    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:42:06.085215    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:42:06.124142    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:42:06.124150    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:42:06.135864    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:42:06.135877    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:42:06.147858    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:42:06.147871    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:42:06.175670    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:42:06.175685    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:42:06.201507    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:42:06.201516    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:42:06.206829    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:42:06.206840    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:42:06.222033    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:42:06.222041    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:42:08.745154    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:06.186111    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:06.186182    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:42:06.197515    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:42:06.197587    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:42:06.209405    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:42:06.209474    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:42:06.221144    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:42:06.221210    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:42:06.233035    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:42:06.233097    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:42:06.244343    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:42:06.244406    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:42:06.255956    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:42:06.256024    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:42:06.266507    4650 logs.go:276] 0 containers: []
	W0805 16:42:06.266520    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:42:06.266577    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:42:06.277218    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:42:06.277238    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:42:06.277243    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:42:06.314792    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:42:06.314803    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:42:06.339652    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:42:06.339664    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:42:06.351088    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:42:06.351101    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:42:06.362827    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:42:06.362840    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:42:06.376637    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:42:06.376648    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:42:06.388419    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:42:06.388434    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:42:06.403549    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:42:06.403561    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:42:06.415323    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:42:06.415334    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:42:06.432931    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:42:06.432942    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:42:06.444223    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:42:06.444236    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:42:06.466088    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:42:06.466103    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:42:06.503901    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:42:06.503914    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:42:06.517834    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:42:06.517846    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:42:06.533737    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:42:06.533748    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:42:06.546455    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:42:06.546466    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:42:06.550837    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:42:06.550846    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:42:09.074462    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:13.747420    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:13.747635    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:42:13.765458    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:42:13.765546    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:42:13.778502    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:42:13.778576    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:42:13.790179    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:42:13.790243    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:42:13.800787    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:42:13.800854    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:42:13.811084    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:42:13.811137    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:42:13.822091    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:42:13.822160    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:42:13.832738    4412 logs.go:276] 0 containers: []
	W0805 16:42:13.832749    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:42:13.832801    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:42:13.843037    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:42:13.843056    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:42:13.843062    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:42:14.076641    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:14.076734    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:42:14.094239    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:42:14.094316    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:42:14.105703    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:42:14.105783    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:42:14.116500    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:42:14.116557    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:42:14.126983    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:42:14.127050    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:42:14.142012    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:42:14.142086    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:42:14.152513    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:42:14.152577    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:42:14.162901    4650 logs.go:276] 0 containers: []
	W0805 16:42:14.162916    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:42:14.162968    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:42:14.178025    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:42:14.178044    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:42:14.178051    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:42:14.218008    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:42:14.218019    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:42:14.254324    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:42:14.254335    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:42:14.268663    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:42:14.268674    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:42:14.280891    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:42:14.280904    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:42:14.292875    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:42:14.292887    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:42:14.304893    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:42:14.304906    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:42:14.311488    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:42:14.311500    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:42:14.336404    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:42:14.336419    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:42:14.362533    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:42:14.362543    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:42:14.373840    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:42:14.373851    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:42:14.398164    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:42:14.398177    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:42:14.412107    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:42:14.412117    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:42:14.434799    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:42:14.434808    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:42:14.446144    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:42:14.446156    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:42:14.467603    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:42:14.467613    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:42:14.484063    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:42:14.484074    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:42:13.880523    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:42:13.880533    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:42:13.894917    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:42:13.894927    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:42:13.906805    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:42:13.906816    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:42:13.921370    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:42:13.921381    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:42:13.946191    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:42:13.946200    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:42:13.957872    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:42:13.957882    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:42:13.969410    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:42:13.969420    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:42:13.986576    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:42:13.986587    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:42:13.991156    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:42:13.991165    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:42:14.003224    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:42:14.003235    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:42:14.017959    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:42:14.017968    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:42:14.031736    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:42:14.031748    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:42:14.067561    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:42:14.067572    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:42:14.087574    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:42:14.087588    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:42:16.602471    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:16.998387    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:21.604765    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:21.604987    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:42:21.617791    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:42:21.617876    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:42:21.629145    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:42:21.629209    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:42:21.639805    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:42:21.639880    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:42:21.650480    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:42:21.650553    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:42:21.661360    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:42:21.661430    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:42:21.676971    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:42:21.677043    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:42:21.693621    4412 logs.go:276] 0 containers: []
	W0805 16:42:21.693631    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:42:21.693691    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:42:21.704193    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:42:21.704210    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:42:21.704215    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:42:21.738778    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:42:21.738788    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:42:21.750809    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:42:21.750819    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:42:21.790204    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:42:21.790213    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:42:21.805871    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:42:21.805883    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:42:21.817791    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:42:21.817801    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:42:21.829594    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:42:21.829603    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:42:21.852865    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:42:21.852873    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:42:21.864112    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:42:21.864122    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:42:21.868673    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:42:21.868680    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:42:21.882120    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:42:21.882130    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:42:21.893375    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:42:21.893383    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:42:21.905037    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:42:21.905048    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:42:21.922653    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:42:21.922666    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:42:21.934374    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:42:21.934383    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:42:22.000624    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:22.000739    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:42:22.011551    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:42:22.011622    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:42:22.022162    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:42:22.022221    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:42:22.033576    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:42:22.033636    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:42:22.044019    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:42:22.044082    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:42:22.058995    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:42:22.059058    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:42:22.072650    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:42:22.072725    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:42:22.082497    4650 logs.go:276] 0 containers: []
	W0805 16:42:22.082510    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:42:22.082557    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:42:22.095988    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:42:22.096003    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:42:22.096010    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:42:22.107950    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:42:22.107960    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:42:22.119523    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:42:22.119535    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:42:22.131165    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:42:22.131174    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:42:22.164711    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:42:22.164725    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:42:22.186093    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:42:22.186104    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:42:22.203121    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:42:22.203130    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:42:22.214275    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:42:22.214289    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:42:22.229206    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:42:22.229215    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:42:22.240310    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:42:22.240322    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:42:22.258411    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:42:22.258424    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:42:22.282307    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:42:22.282323    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:42:22.295051    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:42:22.295066    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:42:22.333364    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:42:22.333381    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:42:22.337531    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:42:22.337540    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:42:22.366618    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:42:22.366637    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:42:22.396571    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:42:22.396586    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:42:24.452859    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:24.913075    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:29.455026    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:29.455221    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:42:29.468269    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:42:29.468346    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:42:29.479526    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:42:29.479596    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:42:29.490466    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:42:29.490540    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:42:29.508239    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:42:29.508304    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:42:29.519118    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:42:29.519186    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:42:29.530659    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:42:29.530731    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:42:29.544647    4412 logs.go:276] 0 containers: []
	W0805 16:42:29.544658    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:42:29.544717    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:42:29.555529    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:42:29.555546    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:42:29.555551    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:42:29.580253    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:42:29.580262    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:42:29.617909    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:42:29.617922    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:42:29.633845    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:42:29.633862    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:42:29.645373    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:42:29.645386    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:42:29.659963    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:42:29.659972    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:42:29.677779    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:42:29.677791    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:42:29.689643    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:42:29.689652    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:42:29.701494    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:42:29.701505    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:42:29.706558    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:42:29.706564    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:42:29.721262    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:42:29.721275    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:42:29.738746    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:42:29.738759    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:42:29.750721    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:42:29.750731    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:42:29.762422    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:42:29.762432    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:42:29.800816    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:42:29.800826    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:42:32.315507    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:29.915186    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:29.915292    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:42:29.930347    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:42:29.930421    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:42:29.942115    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:42:29.942184    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:42:29.953061    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:42:29.953131    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:42:29.963834    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:42:29.963899    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:42:29.975478    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:42:29.975543    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:42:29.986698    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:42:29.986766    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:42:29.997261    4650 logs.go:276] 0 containers: []
	W0805 16:42:29.997272    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:42:29.997327    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:42:30.007625    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:42:30.007644    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:42:30.007649    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:42:30.047309    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:42:30.047319    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:42:30.051455    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:42:30.051462    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:42:30.064737    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:42:30.064752    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:42:30.079695    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:42:30.079706    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:42:30.091746    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:42:30.091757    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:42:30.102980    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:42:30.102991    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:42:30.124675    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:42:30.124683    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:42:30.136937    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:42:30.136947    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:42:30.172040    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:42:30.172052    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:42:30.186177    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:42:30.186187    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:42:30.211125    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:42:30.211136    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:42:30.226431    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:42:30.226443    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:42:30.244483    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:42:30.244492    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:42:30.259236    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:42:30.259247    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:42:30.270459    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:42:30.270471    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:42:30.291782    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:42:30.291793    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:42:32.805470    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:37.317839    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:37.318214    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:42:37.351987    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:42:37.352111    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:42:37.373434    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:42:37.373529    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:42:37.387236    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:42:37.387315    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:42:37.401744    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:42:37.401815    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:42:37.412939    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:42:37.413009    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:42:37.427462    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:42:37.427529    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:42:37.437364    4412 logs.go:276] 0 containers: []
	W0805 16:42:37.437378    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:42:37.437439    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:42:37.447791    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:42:37.447808    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:42:37.447814    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:42:37.452227    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:42:37.452235    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:42:37.467085    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:42:37.467098    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:42:37.482023    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:42:37.482037    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:42:37.496131    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:42:37.496142    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:42:37.509832    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:42:37.509846    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:42:37.521647    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:42:37.521657    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:42:37.562391    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:42:37.562402    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:42:37.598038    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:42:37.598049    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:42:37.610611    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:42:37.610622    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:42:37.625687    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:42:37.625698    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:42:37.643241    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:42:37.643251    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:42:37.654597    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:42:37.654606    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:42:37.679880    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:42:37.679891    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:42:37.691361    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:42:37.691370    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:42:37.807760    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:37.807828    4650 kubeadm.go:597] duration metric: took 4m3.955374s to restartPrimaryControlPlane
	W0805 16:42:37.807866    4650 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 16:42:37.807885    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0805 16:42:38.773604    4650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:42:38.778837    4650 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 16:42:38.781641    4650 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 16:42:38.784565    4650 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:42:38.784572    4650 kubeadm.go:157] found existing configuration files:
	
	I0805 16:42:38.784593    4650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf
	I0805 16:42:38.787206    4650 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:42:38.787234    4650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 16:42:38.790264    4650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf
	I0805 16:42:38.793516    4650 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:42:38.793537    4650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 16:42:38.797044    4650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf
	I0805 16:42:38.799996    4650 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:42:38.800016    4650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 16:42:38.802644    4650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf
	I0805 16:42:38.805501    4650 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:42:38.805521    4650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 16:42:38.808602    4650 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 16:42:38.824282    4650 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0805 16:42:38.824322    4650 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 16:42:38.875973    4650 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 16:42:38.876031    4650 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 16:42:38.876089    4650 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 16:42:38.924234    4650 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 16:42:38.932383    4650 out.go:204]   - Generating certificates and keys ...
	I0805 16:42:38.932423    4650 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 16:42:38.932460    4650 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 16:42:38.932497    4650 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 16:42:38.932528    4650 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 16:42:38.932569    4650 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 16:42:38.932598    4650 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 16:42:38.932631    4650 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 16:42:38.932661    4650 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 16:42:38.932721    4650 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 16:42:38.932773    4650 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 16:42:38.932796    4650 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 16:42:38.932840    4650 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 16:42:39.074663    4650 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 16:42:39.220547    4650 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 16:42:39.320055    4650 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 16:42:39.414354    4650 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 16:42:39.443240    4650 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 16:42:39.443655    4650 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 16:42:39.443680    4650 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 16:42:39.527342    4650 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 16:42:39.530647    4650 out.go:204]   - Booting up control plane ...
	I0805 16:42:39.530722    4650 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 16:42:39.530767    4650 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 16:42:39.530821    4650 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 16:42:39.530863    4650 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 16:42:39.530969    4650 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 16:42:40.205139    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:44.032737    4650 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.506058 seconds
	I0805 16:42:44.032811    4650 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 16:42:44.036887    4650 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 16:42:44.545529    4650 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 16:42:44.545638    4650 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-596000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 16:42:45.049998    4650 kubeadm.go:310] [bootstrap-token] Using token: bx3rbc.9i1vtplwmfu92vdl
	I0805 16:42:45.056431    4650 out.go:204]   - Configuring RBAC rules ...
	I0805 16:42:45.056497    4650 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 16:42:45.056540    4650 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 16:42:45.058412    4650 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 16:42:45.062835    4650 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 16:42:45.063808    4650 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 16:42:45.064710    4650 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 16:42:45.068098    4650 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 16:42:45.253608    4650 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 16:42:45.454448    4650 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 16:42:45.454808    4650 kubeadm.go:310] 
	I0805 16:42:45.454860    4650 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 16:42:45.454867    4650 kubeadm.go:310] 
	I0805 16:42:45.454906    4650 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 16:42:45.454911    4650 kubeadm.go:310] 
	I0805 16:42:45.454927    4650 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 16:42:45.454982    4650 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 16:42:45.455017    4650 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 16:42:45.455022    4650 kubeadm.go:310] 
	I0805 16:42:45.455048    4650 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 16:42:45.455052    4650 kubeadm.go:310] 
	I0805 16:42:45.455074    4650 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 16:42:45.455077    4650 kubeadm.go:310] 
	I0805 16:42:45.455101    4650 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 16:42:45.455143    4650 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 16:42:45.455182    4650 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 16:42:45.455185    4650 kubeadm.go:310] 
	I0805 16:42:45.455237    4650 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 16:42:45.455278    4650 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 16:42:45.455281    4650 kubeadm.go:310] 
	I0805 16:42:45.455329    4650 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bx3rbc.9i1vtplwmfu92vdl \
	I0805 16:42:45.455391    4650 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7280cf86517627a1b2e8b1aa5e2d30adc1efda7485123a11788055778cfe70b7 \
	I0805 16:42:45.455408    4650 kubeadm.go:310] 	--control-plane 
	I0805 16:42:45.455413    4650 kubeadm.go:310] 
	I0805 16:42:45.455455    4650 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 16:42:45.455457    4650 kubeadm.go:310] 
	I0805 16:42:45.455504    4650 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bx3rbc.9i1vtplwmfu92vdl \
	I0805 16:42:45.455555    4650 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7280cf86517627a1b2e8b1aa5e2d30adc1efda7485123a11788055778cfe70b7 
	I0805 16:42:45.455671    4650 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
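The join commands printed above embed a --discovery-token-ca-cert-hash, which kubeadm defines as the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info (RFC 7469 pinning format). Below is a minimal Go sketch, not minikube's own code, of recomputing that hash on the node; the CA path /etc/kubernetes/pki/ca.crt is kubeadm's conventional default and is an assumption here:

    // ca_hash.go — recompute kubeadm's discovery-token-ca-cert-hash
    // (sha256 over the CA cert's DER-encoded SubjectPublicKeyInfo).
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // assumed kubeadm default path
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		log.Fatal("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Marshal just the public key back to SPKI DER, then hash it.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	sum := sha256.Sum256(spki)
    	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
    }

If the output matches the sha256:7280cf86... value in the log, a worker can join this control plane with the printed token.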
	I0805 16:42:45.455682    4650 cni.go:84] Creating CNI manager for ""
	I0805 16:42:45.455689    4650 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:42:45.459823    4650 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 16:42:45.467687    4650 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 16:42:45.470735    4650 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
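The "scp memory" step above pushes a 496-byte bridge conflist straight from memory to /etc/cni/net.d/1-k8s.conflist. The log does not show the payload, so the JSON below is only an illustrative guess at what a minimal bridge conflist of that shape looks like, wrapped in a short Go sketch of the write:

    // write_conflist.go — hedged sketch of shipping a bridge CNI config;
    // the JSON payload is an assumption, not minikube's actual template.
    package main

    import (
    	"log"
    	"os"
    )

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }
    `

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }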
	I0805 16:42:45.475816    4650 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 16:42:45.475907    4650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-596000 minikube.k8s.io/updated_at=2024_08_05T16_42_45_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4 minikube.k8s.io/name=stopped-upgrade-596000 minikube.k8s.io/primary=true
	I0805 16:42:45.475946    4650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:42:45.479198    4650 ops.go:34] apiserver oom_adj: -16
	I0805 16:42:45.530771    4650 kubeadm.go:1113] duration metric: took 54.919625ms to wait for elevateKubeSystemPrivileges
	I0805 16:42:45.530871    4650 kubeadm.go:394] duration metric: took 4m11.692310292s to StartCluster
	I0805 16:42:45.530887    4650 settings.go:142] acquiring lock: {Name:mk8f45924d83b23294fe6a7ba250768dbca87de2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:42:45.530998    4650 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:42:45.531466    4650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/kubeconfig: {Name:mk0db307fdf97cd8e18f7fd35d350a5523a32e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:42:45.531679    4650 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:42:45.531718    4650 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 16:42:45.531755    4650 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-596000"
	I0805 16:42:45.531768    4650 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-596000"
	W0805 16:42:45.531774    4650 addons.go:243] addon storage-provisioner should already be in state true
	I0805 16:42:45.531787    4650 host.go:66] Checking if "stopped-upgrade-596000" exists ...
	I0805 16:42:45.531783    4650 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-596000"
	I0805 16:42:45.531807    4650 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-596000"
	I0805 16:42:45.531827    4650 config.go:182] Loaded profile config "stopped-upgrade-596000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 16:42:45.532832    4650 kapi.go:59] client config for stopped-upgrade-596000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1054/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105a97e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:42:45.532951    4650 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-596000"
	W0805 16:42:45.532956    4650 addons.go:243] addon default-storageclass should already be in state true
	I0805 16:42:45.532963    4650 host.go:66] Checking if "stopped-upgrade-596000" exists ...
	I0805 16:42:45.535871    4650 out.go:177] * Verifying Kubernetes components...
	I0805 16:42:45.536196    4650 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 16:42:45.539878    4650 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 16:42:45.539886    4650 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/stopped-upgrade-596000/id_rsa Username:docker}
	I0805 16:42:45.543786    4650 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:42:45.207263    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:45.207372    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:42:45.218311    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:42:45.218383    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:42:45.229278    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:42:45.229340    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:42:45.240908    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:42:45.240978    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:42:45.253898    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:42:45.253969    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:42:45.266469    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:42:45.266544    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:42:45.278473    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:42:45.278538    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:42:45.289872    4412 logs.go:276] 0 containers: []
	W0805 16:42:45.289883    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:42:45.289937    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:42:45.300983    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:42:45.300999    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:42:45.301004    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:42:45.313898    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:42:45.313909    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:42:45.329614    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:42:45.329626    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:42:45.344937    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:42:45.344950    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:42:45.360922    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:42:45.360933    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:42:45.378532    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:42:45.378551    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:42:45.397523    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:42:45.397538    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:42:45.411250    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:42:45.411263    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:42:45.452196    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:42:45.452211    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:42:45.492511    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:42:45.492522    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:42:45.505625    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:42:45.505638    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:42:45.518738    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:42:45.518750    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:42:45.531841    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:42:45.531849    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:42:45.556703    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:42:45.556716    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:42:45.562370    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:42:45.562379    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:42:48.088924    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:45.546779    4650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:42:45.550796    4650 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:42:45.550802    4650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 16:42:45.550808    4650 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/stopped-upgrade-596000/id_rsa Username:docker}
	I0805 16:42:45.641943    4650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:42:45.647287    4650 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:42:45.647330    4650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:42:45.651126    4650 api_server.go:72] duration metric: took 119.437417ms to wait for apiserver process to appear ...
	I0805 16:42:45.651135    4650 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:42:45.651143    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:45.707452    4650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 16:42:45.720450    4650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:42:53.091128    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:53.091242    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:42:53.103327    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:42:53.103390    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:42:53.113843    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:42:53.113912    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:42:53.127628    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:42:53.127704    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:42:53.137874    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:42:53.137938    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:42:53.148133    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:42:53.148206    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:42:53.160627    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:42:53.160691    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:42:53.171383    4412 logs.go:276] 0 containers: []
	W0805 16:42:53.171393    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:42:53.171451    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:42:53.181763    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:42:53.181784    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:42:53.181789    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:42:53.206408    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:42:53.206416    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:42:53.210719    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:42:53.210725    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:42:53.224989    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:42:53.224999    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:42:53.236560    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:42:53.236570    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:42:53.251812    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:42:53.251822    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:42:53.278495    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:42:53.278507    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:42:53.290752    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:42:53.290763    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:42:53.304851    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:42:53.304863    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:42:53.317197    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:42:53.317207    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:42:53.355584    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:42:53.355594    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:42:53.366904    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:42:53.366914    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:42:53.378942    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:42:53.378951    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:42:53.414131    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:42:53.414142    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:42:53.426092    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:42:53.426103    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:42:50.653207    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:50.653264    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:55.939943    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:55.653587    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:55.653617    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
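Both minikube processes (pids 4650 and 4412) are now in the same loop: probe https://10.0.2.15:8443/healthz, hit the per-request Client.Timeout, gather container logs, and retry until the overall "wait 6m0s for node" budget expires. A self-contained Go sketch of that probe loop follows; skipping TLS verification is a simplification here (minikube itself pins the cluster CA):

    // healthz_probe.go — minimal sketch of the api_server.go poll loop above.
    package main

    import (
    	"context"
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		// Per-request timeout, mirroring the "Client.Timeout exceeded" errors in the log.
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // simplification; minikube uses the cluster CA
    		},
    	}
    	// Overall budget, matching the 6m0s node wait.
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	for {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    		}
    		select {
    		case <-ctx.Done():
    			fmt.Println("apiserver healthz never reported healthy:", ctx.Err())
    			return
    		case <-time.After(5 * time.Second):
    		}
    	}
    }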
	I0805 16:43:00.942006    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:43:00.942101    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:43:00.955013    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:43:00.955090    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:43:00.965995    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:43:00.966062    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:43:00.976459    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:43:00.976525    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:43:00.990423    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:43:00.990496    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:43:01.001059    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:43:01.001123    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:43:01.012185    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:43:01.012240    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:43:01.022341    4412 logs.go:276] 0 containers: []
	W0805 16:43:01.022356    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:43:01.022414    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:43:01.032907    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:43:01.032923    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:43:01.032927    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:43:01.037808    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:43:01.037816    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:43:01.055745    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:43:01.055759    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:43:01.080283    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:43:01.080293    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:43:01.119701    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:43:01.119711    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:43:01.131867    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:43:01.131878    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:43:01.147024    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:43:01.147035    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:43:01.164076    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:43:01.164086    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:43:01.177544    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:43:01.177553    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:43:01.189011    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:43:01.189022    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:43:01.200604    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:43:01.200617    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:43:01.211919    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:43:01.211930    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:43:01.223245    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:43:01.223255    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:43:01.260806    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:43:01.260818    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:43:01.273967    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:43:01.273979    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:43:03.788390    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:43:00.653896    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:43:00.653939    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:43:08.790521    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:43:08.790730    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:43:08.804071    4412 logs.go:276] 1 containers: [9a2065bdb854]
	I0805 16:43:08.804151    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:43:08.814950    4412 logs.go:276] 1 containers: [27bfaf19ec6b]
	I0805 16:43:08.815021    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:43:08.825814    4412 logs.go:276] 4 containers: [db5cf4b1fb93 89ba262d9a2c bbc24100193e b6a5ca2c0447]
	I0805 16:43:08.825885    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:43:08.836377    4412 logs.go:276] 1 containers: [17b616d43405]
	I0805 16:43:08.836443    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:43:08.847318    4412 logs.go:276] 1 containers: [532ba0dd9289]
	I0805 16:43:08.847389    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:43:05.654410    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:43:05.654462    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:43:08.858085    4412 logs.go:276] 1 containers: [925ef8e92894]
	I0805 16:43:08.858153    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:43:08.868652    4412 logs.go:276] 0 containers: []
	W0805 16:43:08.868663    4412 logs.go:278] No container was found matching "kindnet"
	I0805 16:43:08.868721    4412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:43:08.879803    4412 logs.go:276] 1 containers: [615d1f3eda2b]
	I0805 16:43:08.879819    4412 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:43:08.879824    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:43:08.948922    4412 logs.go:123] Gathering logs for kube-proxy [532ba0dd9289] ...
	I0805 16:43:08.948933    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532ba0dd9289"
	I0805 16:43:08.961368    4412 logs.go:123] Gathering logs for kubelet ...
	I0805 16:43:08.961378    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:43:09.001651    4412 logs.go:123] Gathering logs for kube-apiserver [9a2065bdb854] ...
	I0805 16:43:09.001660    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2065bdb854"
	I0805 16:43:09.016551    4412 logs.go:123] Gathering logs for coredns [bbc24100193e] ...
	I0805 16:43:09.016575    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc24100193e"
	I0805 16:43:09.028548    4412 logs.go:123] Gathering logs for container status ...
	I0805 16:43:09.028559    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:43:09.040628    4412 logs.go:123] Gathering logs for dmesg ...
	I0805 16:43:09.040642    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:43:09.045459    4412 logs.go:123] Gathering logs for etcd [27bfaf19ec6b] ...
	I0805 16:43:09.045468    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bfaf19ec6b"
	I0805 16:43:09.059513    4412 logs.go:123] Gathering logs for coredns [89ba262d9a2c] ...
	I0805 16:43:09.059522    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89ba262d9a2c"
	I0805 16:43:09.071023    4412 logs.go:123] Gathering logs for kube-scheduler [17b616d43405] ...
	I0805 16:43:09.071032    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b616d43405"
	I0805 16:43:09.086647    4412 logs.go:123] Gathering logs for Docker ...
	I0805 16:43:09.086660    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:43:09.110636    4412 logs.go:123] Gathering logs for coredns [db5cf4b1fb93] ...
	I0805 16:43:09.110647    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db5cf4b1fb93"
	I0805 16:43:09.122478    4412 logs.go:123] Gathering logs for coredns [b6a5ca2c0447] ...
	I0805 16:43:09.122492    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6a5ca2c0447"
	I0805 16:43:09.134345    4412 logs.go:123] Gathering logs for kube-controller-manager [925ef8e92894] ...
	I0805 16:43:09.134356    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925ef8e92894"
	I0805 16:43:09.152084    4412 logs.go:123] Gathering logs for storage-provisioner [615d1f3eda2b] ...
	I0805 16:43:09.152094    4412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615d1f3eda2b"
	I0805 16:43:11.665474    4412 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:43:10.655039    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:43:10.655080    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:43:15.655845    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:43:15.655904    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0805 16:43:16.029602    4650 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0805 16:43:16.034895    4650 out.go:177] * Enabled addons: storage-provisioner
	I0805 16:43:16.666991    4412 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:43:16.671578    4412 out.go:177] 
	W0805 16:43:16.675549    4412 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0805 16:43:16.675559    4412 out.go:239] * 
	W0805 16:43:16.676358    4412 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:43:16.687419    4412 out.go:177] 
	I0805 16:43:16.042772    4650 addons.go:510] duration metric: took 30.511670625s for enable addons: enabled=[storage-provisioner]
	I0805 16:43:20.656603    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:43:20.656642    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:43:25.657835    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:43:25.657864    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-08-05 23:34:16 UTC, ends at Mon 2024-08-05 23:43:32 UTC. --
	Aug 05 23:43:17 running-upgrade-230000 dockerd[3232]: time="2024-08-05T23:43:17.395074760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:43:17 running-upgrade-230000 dockerd[3232]: time="2024-08-05T23:43:17.395198464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:43:17 running-upgrade-230000 dockerd[3232]: time="2024-08-05T23:43:17.395223713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:43:17 running-upgrade-230000 dockerd[3232]: time="2024-08-05T23:43:17.395300336Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ec29261dbb7552f2c5fb4170b5ba77b5c51db612b6dbe569d625dc9064fd3253 pid=18853 runtime=io.containerd.runc.v2
	Aug 05 23:43:18 running-upgrade-230000 cri-dockerd[3074]: time="2024-08-05T23:43:18Z" level=error msg="ContainerStats resp: {0x4000747a00 linux}"
	Aug 05 23:43:19 running-upgrade-230000 cri-dockerd[3074]: time="2024-08-05T23:43:19Z" level=error msg="ContainerStats resp: {0x40009ac040 linux}"
	Aug 05 23:43:19 running-upgrade-230000 cri-dockerd[3074]: time="2024-08-05T23:43:19Z" level=error msg="ContainerStats resp: {0x4000a18580 linux}"
	Aug 05 23:43:19 running-upgrade-230000 cri-dockerd[3074]: time="2024-08-05T23:43:19Z" level=error msg="ContainerStats resp: {0x4000a18780 linux}"
	Aug 05 23:43:19 running-upgrade-230000 cri-dockerd[3074]: time="2024-08-05T23:43:19Z" level=error msg="ContainerStats resp: {0x4000a19340 linux}"
	Aug 05 23:43:19 running-upgrade-230000 cri-dockerd[3074]: time="2024-08-05T23:43:19Z" level=error msg="ContainerStats resp: {0x40009ac400 linux}"
	Aug 05 23:43:19 running-upgrade-230000 cri-dockerd[3074]: time="2024-08-05T23:43:19Z" level=error msg="ContainerStats resp: {0x40009ac600 linux}"
	Aug 05 23:43:19 running-upgrade-230000 cri-dockerd[3074]: time="2024-08-05T23:43:19Z" level=error msg="ContainerStats resp: {0x40009acc00 linux}"
	Aug 05 23:43:19 running-upgrade-230000 cri-dockerd[3074]: time="2024-08-05T23:43:19Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 05 23:43:24 running-upgrade-230000 cri-dockerd[3074]: time="2024-08-05T23:43:24Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 05 23:43:29 running-upgrade-230000 cri-dockerd[3074]: time="2024-08-05T23:43:29Z" level=error msg="ContainerStats resp: {0x4000747580 linux}"
	Aug 05 23:43:29 running-upgrade-230000 cri-dockerd[3074]: time="2024-08-05T23:43:29Z" level=error msg="ContainerStats resp: {0x40006960c0 linux}"
	Aug 05 23:43:29 running-upgrade-230000 cri-dockerd[3074]: time="2024-08-05T23:43:29Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 05 23:43:30 running-upgrade-230000 cri-dockerd[3074]: time="2024-08-05T23:43:30Z" level=error msg="ContainerStats resp: {0x400084eb00 linux}"
	Aug 05 23:43:31 running-upgrade-230000 cri-dockerd[3074]: time="2024-08-05T23:43:31Z" level=error msg="ContainerStats resp: {0x400009c840 linux}"
	Aug 05 23:43:31 running-upgrade-230000 cri-dockerd[3074]: time="2024-08-05T23:43:31Z" level=error msg="ContainerStats resp: {0x400084f840 linux}"
	Aug 05 23:43:31 running-upgrade-230000 cri-dockerd[3074]: time="2024-08-05T23:43:31Z" level=error msg="ContainerStats resp: {0x40003dc6c0 linux}"
	Aug 05 23:43:31 running-upgrade-230000 cri-dockerd[3074]: time="2024-08-05T23:43:31Z" level=error msg="ContainerStats resp: {0x40003dcbc0 linux}"
	Aug 05 23:43:31 running-upgrade-230000 cri-dockerd[3074]: time="2024-08-05T23:43:31Z" level=error msg="ContainerStats resp: {0x40003dd040 linux}"
	Aug 05 23:43:31 running-upgrade-230000 cri-dockerd[3074]: time="2024-08-05T23:43:31Z" level=error msg="ContainerStats resp: {0x40003dd640 linux}"
	Aug 05 23:43:31 running-upgrade-230000 cri-dockerd[3074]: time="2024-08-05T23:43:31Z" level=error msg="ContainerStats resp: {0x40003dc100 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	789bb727a0e51       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   3e787043e6252
	ec29261dbb755       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   65a14f7250a63
	db5cf4b1fb939       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   3e787043e6252
	89ba262d9a2c2       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   65a14f7250a63
	615d1f3eda2bc       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   fee4030b1c1e3
	532ba0dd92890       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   047a374472f69
	17b616d434053       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   a0f7712fc9ec2
	27bfaf19ec6bd       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   89d0b374028d0
	925ef8e928944       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   7914b064efb04
	9a2065bdb854e       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   06d389baaadd7
	
	
	==> coredns [789bb727a0e5] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4951703583017122130.978897703304626889. HINFO: read udp 10.244.0.3:36671->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4951703583017122130.978897703304626889. HINFO: read udp 10.244.0.3:35097->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4951703583017122130.978897703304626889. HINFO: read udp 10.244.0.3:57634->10.0.2.3:53: i/o timeout
	
	
	==> coredns [89ba262d9a2c] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 626589037418007383.3469497078775026833. HINFO: read udp 10.244.0.2:46587->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 626589037418007383.3469497078775026833. HINFO: read udp 10.244.0.2:41064->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 626589037418007383.3469497078775026833. HINFO: read udp 10.244.0.2:52610->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 626589037418007383.3469497078775026833. HINFO: read udp 10.244.0.2:45713->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 626589037418007383.3469497078775026833. HINFO: read udp 10.244.0.2:42488->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 626589037418007383.3469497078775026833. HINFO: read udp 10.244.0.2:33877->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 626589037418007383.3469497078775026833. HINFO: read udp 10.244.0.2:34511->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 626589037418007383.3469497078775026833. HINFO: read udp 10.244.0.2:45112->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 626589037418007383.3469497078775026833. HINFO: read udp 10.244.0.2:43328->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 626589037418007383.3469497078775026833. HINFO: read udp 10.244.0.2:33679->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [db5cf4b1fb93] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7673194006746877282.7004141083750229650. HINFO: read udp 10.244.0.3:42453->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7673194006746877282.7004141083750229650. HINFO: read udp 10.244.0.3:40287->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7673194006746877282.7004141083750229650. HINFO: read udp 10.244.0.3:56738->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7673194006746877282.7004141083750229650. HINFO: read udp 10.244.0.3:44607->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7673194006746877282.7004141083750229650. HINFO: read udp 10.244.0.3:47860->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7673194006746877282.7004141083750229650. HINFO: read udp 10.244.0.3:35526->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7673194006746877282.7004141083750229650. HINFO: read udp 10.244.0.3:40554->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7673194006746877282.7004141083750229650. HINFO: read udp 10.244.0.3:54396->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7673194006746877282.7004141083750229650. HINFO: read udp 10.244.0.3:48104->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7673194006746877282.7004141083750229650. HINFO: read udp 10.244.0.3:35940->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ec29261dbb75] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5427281033050833381.4124364423722066239. HINFO: read udp 10.244.0.2:44801->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5427281033050833381.4124364423722066239. HINFO: read udp 10.244.0.2:52692->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5427281033050833381.4124364423722066239. HINFO: read udp 10.244.0.2:47297->10.0.2.3:53: i/o timeout
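All four coredns instances above report the same failure mode: HINFO self-test probes to the upstream resolver 10.0.2.3:53 (the DNS server QEMU's user-mode networking provides) time out, i.e. the pod network cannot reach DNS outside the VM. A hedged Go sketch of reproducing that probe from inside the guest; the hostname kubernetes.io is an arbitrary choice:

    // dns_probe.go — force lookups through the VM's upstream resolver
    // 10.0.2.3:53 and check whether they time out like coredns does.
    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	r := &net.Resolver{
    		PreferGo: true,
    		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
    			d := net.Dialer{Timeout: 2 * time.Second}
    			// Bypass the configured resolver and dial QEMU's slirp DNS directly.
    			return d.DialContext(ctx, "udp", "10.0.2.3:53")
    		},
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()
    	addrs, err := r.LookupHost(ctx, "kubernetes.io")
    	if err != nil {
    		fmt.Println("lookup failed (matches the i/o timeouts above):", err)
    		return
    	}
    	fmt.Println("resolved:", addrs)
    }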
	
	
	==> describe nodes <==
	Name:               running-upgrade-230000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-230000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=running-upgrade-230000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T16_39_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:39:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-230000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:43:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:39:15 +0000   Mon, 05 Aug 2024 23:39:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:39:15 +0000   Mon, 05 Aug 2024 23:39:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:39:15 +0000   Mon, 05 Aug 2024 23:39:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:39:15 +0000   Mon, 05 Aug 2024 23:39:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-230000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 a834a429ab39401896f32aa987c1aab1
	  System UUID:                a834a429ab39401896f32aa987c1aab1
	  Boot ID:                    205c0a1a-28c7-499a-b10f-17002976557d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-fm959                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-p4gkf                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-230000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-230000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-230000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-dzzsx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-230000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)    0 (0%)
	  memory             240Mi (11%)   340Mi (16%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-1Gi      0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	  hugepages-32Mi     0 (0%)        0 (0%)
	  hugepages-64Ki     0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m22s (x4 over 4m22s)  kubelet          Node running-upgrade-230000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x4 over 4m22s)  kubelet          Node running-upgrade-230000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x3 over 4m22s)  kubelet          Node running-upgrade-230000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-230000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-230000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-230000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-230000 status is now: NodeReady
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s                   node-controller  Node running-upgrade-230000 event: Registered Node running-upgrade-230000 in Controller
	
	
	==> dmesg <==
	[  +1.681426] systemd-fstab-generator[874]: Ignoring "noauto" for root device
	[  +0.074745] systemd-fstab-generator[885]: Ignoring "noauto" for root device
	[  +0.077308] systemd-fstab-generator[896]: Ignoring "noauto" for root device
	[  +1.144828] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.096366] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.082814] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +2.831032] systemd-fstab-generator[1284]: Ignoring "noauto" for root device
	[  +9.153395] systemd-fstab-generator[1933]: Ignoring "noauto" for root device
	[  +2.878242] systemd-fstab-generator[2217]: Ignoring "noauto" for root device
	[  +0.160529] systemd-fstab-generator[2251]: Ignoring "noauto" for root device
	[  +0.093790] systemd-fstab-generator[2262]: Ignoring "noauto" for root device
	[  +0.093066] systemd-fstab-generator[2275]: Ignoring "noauto" for root device
	[ +13.240516] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.209790] systemd-fstab-generator[3028]: Ignoring "noauto" for root device
	[  +0.099883] systemd-fstab-generator[3042]: Ignoring "noauto" for root device
	[  +0.082729] systemd-fstab-generator[3053]: Ignoring "noauto" for root device
	[  +0.085405] systemd-fstab-generator[3067]: Ignoring "noauto" for root device
	[Aug 5 23:35] systemd-fstab-generator[3219]: Ignoring "noauto" for root device
	[  +2.754631] systemd-fstab-generator[3613]: Ignoring "noauto" for root device
	[  +1.206860] systemd-fstab-generator[3870]: Ignoring "noauto" for root device
	[ +18.224822] kauditd_printk_skb: 68 callbacks suppressed
	[Aug 5 23:39] kauditd_printk_skb: 25 callbacks suppressed
	[  +1.447429] systemd-fstab-generator[11924]: Ignoring "noauto" for root device
	[  +5.131509] systemd-fstab-generator[12515]: Ignoring "noauto" for root device
	[  +0.486611] systemd-fstab-generator[12648]: Ignoring "noauto" for root device
	
	
	==> etcd [27bfaf19ec6b] <==
	{"level":"info","ts":"2024-08-05T23:39:11.652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-05T23:39:11.652Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-05T23:39:11.679Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-05T23:39:11.679Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-05T23:39:11.679Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-05T23:39:11.679Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-05T23:39:11.679Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-05T23:39:11.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-05T23:39:11.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-05T23:39:11.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-05T23:39:11.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-05T23:39:11.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-05T23:39:11.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-05T23:39:11.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-05T23:39:11.745Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-230000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T23:39:11.745Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:39:11.745Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-08-05T23:39:11.745Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:39:11.745Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:39:11.746Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T23:39:11.752Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T23:39:11.752Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T23:39:11.771Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:39:11.771Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:39:11.771Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 23:43:33 up 9 min,  0 users,  load average: 0.17, 0.33, 0.17
	Linux running-upgrade-230000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [9a2065bdb854] <==
	I0805 23:39:13.316661       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0805 23:39:13.334744       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0805 23:39:13.335840       1 cache.go:39] Caches are synced for autoregister controller
	I0805 23:39:13.340233       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0805 23:39:13.340385       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0805 23:39:13.342045       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0805 23:39:13.353156       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0805 23:39:14.067071       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0805 23:39:14.241145       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0805 23:39:14.245611       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0805 23:39:14.245639       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0805 23:39:14.389622       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0805 23:39:14.400346       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0805 23:39:14.413732       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0805 23:39:14.416048       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0805 23:39:14.416454       1 controller.go:611] quota admission added evaluator for: endpoints
	I0805 23:39:14.417696       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0805 23:39:15.387730       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0805 23:39:15.622843       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0805 23:39:15.626051       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0805 23:39:15.635780       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0805 23:39:15.674536       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 23:39:28.691779       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0805 23:39:28.942931       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0805 23:39:29.190349       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [925ef8e92894] <==
	I0805 23:39:28.240700       1 shared_informer.go:262] Caches are synced for cronjob
	I0805 23:39:28.325528       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0805 23:39:28.326625       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0805 23:39:28.328787       1 shared_informer.go:262] Caches are synced for stateful set
	I0805 23:39:28.384394       1 shared_informer.go:262] Caches are synced for ephemeral
	I0805 23:39:28.390392       1 shared_informer.go:262] Caches are synced for PVC protection
	I0805 23:39:28.391472       1 shared_informer.go:262] Caches are synced for resource quota
	I0805 23:39:28.403230       1 shared_informer.go:262] Caches are synced for taint
	I0805 23:39:28.403294       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0805 23:39:28.403355       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-230000. Assuming now as a timestamp.
	I0805 23:39:28.403396       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0805 23:39:28.403367       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0805 23:39:28.403491       1 event.go:294] "Event occurred" object="running-upgrade-230000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-230000 event: Registered Node running-upgrade-230000 in Controller"
	I0805 23:39:28.403884       1 shared_informer.go:262] Caches are synced for persistent volume
	I0805 23:39:28.411492       1 shared_informer.go:262] Caches are synced for expand
	I0805 23:39:28.437555       1 shared_informer.go:262] Caches are synced for attach detach
	I0805 23:39:28.440660       1 shared_informer.go:262] Caches are synced for PV protection
	I0805 23:39:28.441763       1 shared_informer.go:262] Caches are synced for resource quota
	I0805 23:39:28.698751       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-dzzsx"
	I0805 23:39:28.852357       1 shared_informer.go:262] Caches are synced for garbage collector
	I0805 23:39:28.890395       1 shared_informer.go:262] Caches are synced for garbage collector
	I0805 23:39:28.890413       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0805 23:39:28.944549       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0805 23:39:29.244467       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-p4gkf"
	I0805 23:39:29.248137       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-fm959"
	
	
	==> kube-proxy [532ba0dd9289] <==
	I0805 23:39:29.177884       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0805 23:39:29.177909       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0805 23:39:29.177920       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0805 23:39:29.188142       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0805 23:39:29.188157       1 server_others.go:206] "Using iptables Proxier"
	I0805 23:39:29.188171       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0805 23:39:29.188265       1 server.go:661] "Version info" version="v1.24.1"
	I0805 23:39:29.188269       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:39:29.188486       1 config.go:317] "Starting service config controller"
	I0805 23:39:29.188493       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0805 23:39:29.188501       1 config.go:226] "Starting endpoint slice config controller"
	I0805 23:39:29.188502       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0805 23:39:29.189476       1 config.go:444] "Starting node config controller"
	I0805 23:39:29.189521       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0805 23:39:29.288575       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0805 23:39:29.288599       1 shared_informer.go:262] Caches are synced for service config
	I0805 23:39:29.289604       1 shared_informer.go:262] Caches are synced for node config
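
	kube-proxy logged "Unknown proxy mode, assuming iptables proxy" and started the iptables Proxier, so service routing on this node is plain iptables NAT. The programmed rules can be inspected from inside the guest; a sketch, assuming shell access to the VM (for example via minikube ssh):

	# KUBE-SERVICES is the dispatch chain maintained by the iptables Proxier
	sudo iptables -t nat -L KUBE-SERVICES -n | head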
	
	
	==> kube-scheduler [17b616d43405] <==
	W0805 23:39:13.302804       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 23:39:13.302826       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0805 23:39:13.302871       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:39:13.302888       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:39:13.302922       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:39:13.302954       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 23:39:13.302985       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 23:39:13.303016       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 23:39:13.303049       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 23:39:13.303065       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0805 23:39:13.303109       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0805 23:39:13.303126       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 23:39:13.303152       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0805 23:39:13.303232       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0805 23:39:13.303289       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 23:39:13.303317       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 23:39:14.118046       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0805 23:39:14.118103       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0805 23:39:14.198743       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 23:39:14.198774       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0805 23:39:14.315753       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0805 23:39:14.315770       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0805 23:39:14.342543       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 23:39:14.342649       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0805 23:39:14.895859       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
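
	The forbidden list/watch errors above are the usual startup race: the scheduler's informers begin listing before the apiserver has finished installing the default RBAC bindings, and they stop once caches sync. Had they persisted, the grants could be checked directly; a sketch, assuming a working kubeconfig for this cluster:

	kubectl describe clusterrolebinding system:kube-scheduler
	kubectl auth can-i list services --as=system:kube-scheduler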
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-08-05 23:34:16 UTC, ends at Mon 2024-08-05 23:43:33 UTC. --
	Aug 05 23:39:17 running-upgrade-230000 kubelet[12521]: E0805 23:39:17.456207   12521 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-230000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-230000"
	Aug 05 23:39:17 running-upgrade-230000 kubelet[12521]: E0805 23:39:17.654478   12521 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-230000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-230000"
	Aug 05 23:39:17 running-upgrade-230000 kubelet[12521]: I0805 23:39:17.851813   12521 request.go:601] Waited for 1.138232299s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Aug 05 23:39:17 running-upgrade-230000 kubelet[12521]: E0805 23:39:17.854929   12521 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-230000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-230000"
	Aug 05 23:39:28 running-upgrade-230000 kubelet[12521]: I0805 23:39:28.261439   12521 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 05 23:39:28 running-upgrade-230000 kubelet[12521]: I0805 23:39:28.261862   12521 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 05 23:39:28 running-upgrade-230000 kubelet[12521]: I0805 23:39:28.409168   12521 topology_manager.go:200] "Topology Admit Handler"
	Aug 05 23:39:28 running-upgrade-230000 kubelet[12521]: I0805 23:39:28.567000   12521 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a370a3b2-acdd-463f-a8b3-2a6db0ceef1f-tmp\") pod \"storage-provisioner\" (UID: \"a370a3b2-acdd-463f-a8b3-2a6db0ceef1f\") " pod="kube-system/storage-provisioner"
	Aug 05 23:39:28 running-upgrade-230000 kubelet[12521]: I0805 23:39:28.567053   12521 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fpxx\" (UniqueName: \"kubernetes.io/projected/a370a3b2-acdd-463f-a8b3-2a6db0ceef1f-kube-api-access-7fpxx\") pod \"storage-provisioner\" (UID: \"a370a3b2-acdd-463f-a8b3-2a6db0ceef1f\") " pod="kube-system/storage-provisioner"
	Aug 05 23:39:28 running-upgrade-230000 kubelet[12521]: E0805 23:39:28.671499   12521 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 05 23:39:28 running-upgrade-230000 kubelet[12521]: E0805 23:39:28.671518   12521 projected.go:192] Error preparing data for projected volume kube-api-access-7fpxx for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 05 23:39:28 running-upgrade-230000 kubelet[12521]: E0805 23:39:28.671549   12521 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/a370a3b2-acdd-463f-a8b3-2a6db0ceef1f-kube-api-access-7fpxx podName:a370a3b2-acdd-463f-a8b3-2a6db0ceef1f nodeName:}" failed. No retries permitted until 2024-08-05 23:39:29.171536853 +0000 UTC m=+13.562656388 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7fpxx" (UniqueName: "kubernetes.io/projected/a370a3b2-acdd-463f-a8b3-2a6db0ceef1f-kube-api-access-7fpxx") pod "storage-provisioner" (UID: "a370a3b2-acdd-463f-a8b3-2a6db0ceef1f") : configmap "kube-root-ca.crt" not found
	Aug 05 23:39:28 running-upgrade-230000 kubelet[12521]: I0805 23:39:28.701998   12521 topology_manager.go:200] "Topology Admit Handler"
	Aug 05 23:39:28 running-upgrade-230000 kubelet[12521]: I0805 23:39:28.869518   12521 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3ad1e66f-5844-42e5-8132-a4c6c205d1b7-kube-proxy\") pod \"kube-proxy-dzzsx\" (UID: \"3ad1e66f-5844-42e5-8132-a4c6c205d1b7\") " pod="kube-system/kube-proxy-dzzsx"
	Aug 05 23:39:28 running-upgrade-230000 kubelet[12521]: I0805 23:39:28.869549   12521 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mlfh\" (UniqueName: \"kubernetes.io/projected/3ad1e66f-5844-42e5-8132-a4c6c205d1b7-kube-api-access-5mlfh\") pod \"kube-proxy-dzzsx\" (UID: \"3ad1e66f-5844-42e5-8132-a4c6c205d1b7\") " pod="kube-system/kube-proxy-dzzsx"
	Aug 05 23:39:28 running-upgrade-230000 kubelet[12521]: I0805 23:39:28.869560   12521 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ad1e66f-5844-42e5-8132-a4c6c205d1b7-lib-modules\") pod \"kube-proxy-dzzsx\" (UID: \"3ad1e66f-5844-42e5-8132-a4c6c205d1b7\") " pod="kube-system/kube-proxy-dzzsx"
	Aug 05 23:39:28 running-upgrade-230000 kubelet[12521]: I0805 23:39:28.869569   12521 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ad1e66f-5844-42e5-8132-a4c6c205d1b7-xtables-lock\") pod \"kube-proxy-dzzsx\" (UID: \"3ad1e66f-5844-42e5-8132-a4c6c205d1b7\") " pod="kube-system/kube-proxy-dzzsx"
	Aug 05 23:39:29 running-upgrade-230000 kubelet[12521]: I0805 23:39:29.246675   12521 topology_manager.go:200] "Topology Admit Handler"
	Aug 05 23:39:29 running-upgrade-230000 kubelet[12521]: I0805 23:39:29.250876   12521 topology_manager.go:200] "Topology Admit Handler"
	Aug 05 23:39:29 running-upgrade-230000 kubelet[12521]: I0805 23:39:29.272163   12521 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73805262-d591-413c-96d0-76a1cd28fb1f-config-volume\") pod \"coredns-6d4b75cb6d-p4gkf\" (UID: \"73805262-d591-413c-96d0-76a1cd28fb1f\") " pod="kube-system/coredns-6d4b75cb6d-p4gkf"
	Aug 05 23:39:29 running-upgrade-230000 kubelet[12521]: I0805 23:39:29.272181   12521 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/613590d8-e777-4ba1-a80a-6ab6fae1ed1a-config-volume\") pod \"coredns-6d4b75cb6d-fm959\" (UID: \"613590d8-e777-4ba1-a80a-6ab6fae1ed1a\") " pod="kube-system/coredns-6d4b75cb6d-fm959"
	Aug 05 23:39:29 running-upgrade-230000 kubelet[12521]: I0805 23:39:29.272192   12521 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltx2q\" (UniqueName: \"kubernetes.io/projected/73805262-d591-413c-96d0-76a1cd28fb1f-kube-api-access-ltx2q\") pod \"coredns-6d4b75cb6d-p4gkf\" (UID: \"73805262-d591-413c-96d0-76a1cd28fb1f\") " pod="kube-system/coredns-6d4b75cb6d-p4gkf"
	Aug 05 23:39:29 running-upgrade-230000 kubelet[12521]: I0805 23:39:29.373170   12521 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdjql\" (UniqueName: \"kubernetes.io/projected/613590d8-e777-4ba1-a80a-6ab6fae1ed1a-kube-api-access-gdjql\") pod \"coredns-6d4b75cb6d-fm959\" (UID: \"613590d8-e777-4ba1-a80a-6ab6fae1ed1a\") " pod="kube-system/coredns-6d4b75cb6d-fm959"
	Aug 05 23:43:17 running-upgrade-230000 kubelet[12521]: I0805 23:43:17.933147   12521 scope.go:110] "RemoveContainer" containerID="b6a5ca2c044757723ad6f254449fd35cd68e272be9cd2bd598df36b0107bfb82"
	Aug 05 23:43:17 running-upgrade-230000 kubelet[12521]: I0805 23:43:17.946619   12521 scope.go:110] "RemoveContainer" containerID="bbc24100193e49d4df0967d543ee2267c4d68ece587550554ee35c05b4ff74f4"
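
	The earlier MountVolume.SetUp failure is transient for a similar reason: the projected service-account volume needs the kube-root-ca.crt configmap, which the controller-manager had not yet published, and the kubelet retried after 500ms as logged. Its presence can be confirmed afterwards; a sketch, assuming a working kubeconfig:

	kubectl -n kube-system get configmap kube-root-ca.crt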
	
	
	==> storage-provisioner [615d1f3eda2b] <==
	I0805 23:39:29.491095       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 23:39:29.495659       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 23:39:29.495876       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0805 23:39:29.498493       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 23:39:29.498596       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-230000_40445bdd-77c1-4d33-8acc-239ef9320249!
	I0805 23:39:29.499004       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"423b6923-ea6a-49f4-b0ae-d9013ece175e", APIVersion:"v1", ResourceVersion:"366", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-230000_40445bdd-77c1-4d33-8acc-239ef9320249 became leader
	I0805 23:39:29.601930       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-230000_40445bdd-77c1-4d33-8acc-239ef9320249!
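
	The provisioner only starts its controller after winning the leader-election lease held on the kube-system/k8s.io-minikube-hostpath endpoints object, as the two lines above show. The current holder is recorded in the object's leader annotation; a sketch of reading it back, assuming a working kubeconfig:

	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml | grep leader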
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-230000 -n running-upgrade-230000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-230000 -n running-upgrade-230000: exit status 2 (15.55932925s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-230000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-230000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-230000
--- FAIL: TestRunningBinaryUpgrade (599.47s)
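
The FAIL is driven by the status probe: status --format={{.APIServer}} printed "Stopped" and exited non-zero, so the helper skipped the kubectl diagnostics. The probe can be replayed by hand against the profile, using the same binary and flags as the test:

	out/minikube-darwin-arm64 status -p running-upgrade-230000 --format='{{.APIServer}}'
	echo "exit code: $?"   # non-zero codes encode which component is down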

TestKubernetesUpgrade (18.72s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-967000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-967000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.760525542s)

-- stdout --
	* [kubernetes-upgrade-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-967000" primary control-plane node in "kubernetes-upgrade-967000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-967000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:36:49.558219    4536 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:36:49.558349    4536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:36:49.558359    4536 out.go:304] Setting ErrFile to fd 2...
	I0805 16:36:49.558366    4536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:36:49.558504    4536 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:36:49.559561    4536 out.go:298] Setting JSON to false
	I0805 16:36:49.575718    4536 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3980,"bootTime":1722897029,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:36:49.575804    4536 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:36:49.582240    4536 out.go:177] * [kubernetes-upgrade-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:36:49.588105    4536 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:36:49.588167    4536 notify.go:220] Checking for updates...
	I0805 16:36:49.594991    4536 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:36:49.598115    4536 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:36:49.601145    4536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:36:49.602573    4536 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:36:49.606063    4536 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:36:49.609414    4536 config.go:182] Loaded profile config "multinode-860000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:36:49.609482    4536 config.go:182] Loaded profile config "running-upgrade-230000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 16:36:49.609523    4536 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:36:49.613936    4536 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 16:36:49.621081    4536 start.go:297] selected driver: qemu2
	I0805 16:36:49.621090    4536 start.go:901] validating driver "qemu2" against <nil>
	I0805 16:36:49.621097    4536 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:36:49.623330    4536 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:36:49.626117    4536 out.go:177] * Automatically selected the socket_vmnet network
	I0805 16:36:49.629216    4536 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 16:36:49.629229    4536 cni.go:84] Creating CNI manager for ""
	I0805 16:36:49.629238    4536 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0805 16:36:49.629276    4536 start.go:340] cluster config:
	{Name:kubernetes-upgrade-967000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:36:49.633158    4536 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:36:49.641022    4536 out.go:177] * Starting "kubernetes-upgrade-967000" primary control-plane node in "kubernetes-upgrade-967000" cluster
	I0805 16:36:49.645141    4536 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 16:36:49.645162    4536 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0805 16:36:49.645175    4536 cache.go:56] Caching tarball of preloaded images
	I0805 16:36:49.645237    4536 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:36:49.645242    4536 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0805 16:36:49.645314    4536 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/kubernetes-upgrade-967000/config.json ...
	I0805 16:36:49.645325    4536 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/kubernetes-upgrade-967000/config.json: {Name:mk8c1ea9a6de68cc5403955437d736deaa82c957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:36:49.645690    4536 start.go:360] acquireMachinesLock for kubernetes-upgrade-967000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:36:49.645724    4536 start.go:364] duration metric: took 25.5µs to acquireMachinesLock for "kubernetes-upgrade-967000"
	I0805 16:36:49.645733    4536 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:36:49.645758    4536 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:36:49.654037    4536 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:36:49.669380    4536 start.go:159] libmachine.API.Create for "kubernetes-upgrade-967000" (driver="qemu2")
	I0805 16:36:49.669419    4536 client.go:168] LocalClient.Create starting
	I0805 16:36:49.669495    4536 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:36:49.669527    4536 main.go:141] libmachine: Decoding PEM data...
	I0805 16:36:49.669536    4536 main.go:141] libmachine: Parsing certificate...
	I0805 16:36:49.669577    4536 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:36:49.669600    4536 main.go:141] libmachine: Decoding PEM data...
	I0805 16:36:49.669609    4536 main.go:141] libmachine: Parsing certificate...
	I0805 16:36:49.669968    4536 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:36:49.816786    4536 main.go:141] libmachine: Creating SSH key...
	I0805 16:36:49.879342    4536 main.go:141] libmachine: Creating Disk image...
	I0805 16:36:49.879352    4536 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:36:49.879531    4536 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/disk.qcow2
	I0805 16:36:49.888649    4536 main.go:141] libmachine: STDOUT: 
	I0805 16:36:49.888669    4536 main.go:141] libmachine: STDERR: 
	I0805 16:36:49.888729    4536 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/disk.qcow2 +20000M
	I0805 16:36:49.896734    4536 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:36:49.896748    4536 main.go:141] libmachine: STDERR: 
	I0805 16:36:49.896769    4536 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/disk.qcow2
	I0805 16:36:49.896776    4536 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:36:49.896787    4536 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:36:49.896819    4536 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:23:fa:06:54:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/disk.qcow2
	I0805 16:36:49.898353    4536 main.go:141] libmachine: STDOUT: 
	I0805 16:36:49.898368    4536 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:36:49.898391    4536 client.go:171] duration metric: took 228.970667ms to LocalClient.Create
	I0805 16:36:51.900571    4536 start.go:128] duration metric: took 2.254828209s to createHost
	I0805 16:36:51.900674    4536 start.go:83] releasing machines lock for "kubernetes-upgrade-967000", held for 2.254984709s
	W0805 16:36:51.900766    4536 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:36:51.916132    4536 out.go:177] * Deleting "kubernetes-upgrade-967000" in qemu2 ...
	W0805 16:36:51.942900    4536 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:36:51.942963    4536 start.go:729] Will try again in 5 seconds ...
	I0805 16:36:56.945059    4536 start.go:360] acquireMachinesLock for kubernetes-upgrade-967000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:36:56.945345    4536 start.go:364] duration metric: took 205.584µs to acquireMachinesLock for "kubernetes-upgrade-967000"
	I0805 16:36:56.945404    4536 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:36:56.945524    4536 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:36:56.955977    4536 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:36:56.998603    4536 start.go:159] libmachine.API.Create for "kubernetes-upgrade-967000" (driver="qemu2")
	I0805 16:36:56.998651    4536 client.go:168] LocalClient.Create starting
	I0805 16:36:56.998803    4536 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:36:56.998872    4536 main.go:141] libmachine: Decoding PEM data...
	I0805 16:36:56.998889    4536 main.go:141] libmachine: Parsing certificate...
	I0805 16:36:56.998950    4536 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:36:56.999002    4536 main.go:141] libmachine: Decoding PEM data...
	I0805 16:36:56.999013    4536 main.go:141] libmachine: Parsing certificate...
	I0805 16:36:56.999466    4536 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:36:57.162409    4536 main.go:141] libmachine: Creating SSH key...
	I0805 16:36:57.227897    4536 main.go:141] libmachine: Creating Disk image...
	I0805 16:36:57.227903    4536 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:36:57.228060    4536 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/disk.qcow2
	I0805 16:36:57.237171    4536 main.go:141] libmachine: STDOUT: 
	I0805 16:36:57.237191    4536 main.go:141] libmachine: STDERR: 
	I0805 16:36:57.237247    4536 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/disk.qcow2 +20000M
	I0805 16:36:57.245168    4536 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:36:57.245184    4536 main.go:141] libmachine: STDERR: 
	I0805 16:36:57.245194    4536 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/disk.qcow2
	I0805 16:36:57.245198    4536 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:36:57.245211    4536 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:36:57.245245    4536 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:0b:85:6d:1a:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/disk.qcow2
	I0805 16:36:57.246812    4536 main.go:141] libmachine: STDOUT: 
	I0805 16:36:57.246829    4536 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:36:57.246841    4536 client.go:171] duration metric: took 248.190667ms to LocalClient.Create
	I0805 16:36:59.249006    4536 start.go:128] duration metric: took 2.3034925s to createHost
	I0805 16:36:59.249100    4536 start.go:83] releasing machines lock for "kubernetes-upgrade-967000", held for 2.303778875s
	W0805 16:36:59.249553    4536 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:36:59.259234    4536 out.go:177] 
	W0805 16:36:59.266370    4536 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:36:59.266401    4536 out.go:239] * 
	* 
	W0805 16:36:59.269102    4536 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:36:59.276240    4536 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-967000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
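
Both provisioning attempts die at the same step: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so no VM ever boots and the later upgrade steps are moot. On a Homebrew-managed agent this usually means the daemon is not running; a sketch of the usual checks (paths as logged above; the service name assumes the Homebrew install described in minikube's qemu2 driver docs):

	ls -l /var/run/socket_vmnet               # the unix socket should exist
	pgrep -fl socket_vmnet                    # the daemon should be running
	sudo brew services restart socket_vmnet   # restart it under the Homebrew setup
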
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-967000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-967000: (3.560392625s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-967000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-967000 status --format={{.Host}}: exit status 7 (52.193542ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-967000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-967000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.179890917s)

-- stdout --
	* [kubernetes-upgrade-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-967000" primary control-plane node in "kubernetes-upgrade-967000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-967000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-967000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:37:02.934748    4587 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:37:02.934892    4587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:37:02.934895    4587 out.go:304] Setting ErrFile to fd 2...
	I0805 16:37:02.934897    4587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:37:02.935036    4587 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:37:02.936113    4587 out.go:298] Setting JSON to false
	I0805 16:37:02.953016    4587 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3993,"bootTime":1722897029,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:37:02.953101    4587 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:37:02.958678    4587 out.go:177] * [kubernetes-upgrade-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:37:02.965681    4587 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:37:02.965724    4587 notify.go:220] Checking for updates...
	I0805 16:37:02.971648    4587 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:37:02.974638    4587 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:37:02.977650    4587 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:37:02.980654    4587 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:37:02.981957    4587 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:37:02.984856    4587 config.go:182] Loaded profile config "kubernetes-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0805 16:37:02.985121    4587 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:37:02.989611    4587 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 16:37:02.994594    4587 start.go:297] selected driver: qemu2
	I0805 16:37:02.994600    4587 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:37:02.994645    4587 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:37:02.996889    4587 cni.go:84] Creating CNI manager for ""
	I0805 16:37:02.996905    4587 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:37:02.996925    4587 start.go:340] cluster config:
	{Name:kubernetes-upgrade-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-967000 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: S
ocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:37:03.000170    4587 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:37:03.007517    4587 out.go:177] * Starting "kubernetes-upgrade-967000" primary control-plane node in "kubernetes-upgrade-967000" cluster
	I0805 16:37:03.011564    4587 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 16:37:03.011578    4587 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0805 16:37:03.011585    4587 cache.go:56] Caching tarball of preloaded images
	I0805 16:37:03.011637    4587 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:37:03.011643    4587 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0805 16:37:03.011694    4587 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/kubernetes-upgrade-967000/config.json ...
	I0805 16:37:03.012038    4587 start.go:360] acquireMachinesLock for kubernetes-upgrade-967000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:37:03.012070    4587 start.go:364] duration metric: took 26.25µs to acquireMachinesLock for "kubernetes-upgrade-967000"
	I0805 16:37:03.012078    4587 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:37:03.012085    4587 fix.go:54] fixHost starting: 
	I0805 16:37:03.012195    4587 fix.go:112] recreateIfNeeded on kubernetes-upgrade-967000: state=Stopped err=<nil>
	W0805 16:37:03.012203    4587 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:37:03.020581    4587 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-967000" ...
	I0805 16:37:03.024612    4587 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:37:03.024646    4587 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:0b:85:6d:1a:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/disk.qcow2
	I0805 16:37:03.026502    4587 main.go:141] libmachine: STDOUT: 
	I0805 16:37:03.026520    4587 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:37:03.026546    4587 fix.go:56] duration metric: took 14.460458ms for fixHost
	I0805 16:37:03.026550    4587 start.go:83] releasing machines lock for "kubernetes-upgrade-967000", held for 14.475917ms
	W0805 16:37:03.026555    4587 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:37:03.026594    4587 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:37:03.026598    4587 start.go:729] Will try again in 5 seconds ...
	I0805 16:37:08.028726    4587 start.go:360] acquireMachinesLock for kubernetes-upgrade-967000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:37:08.029253    4587 start.go:364] duration metric: took 381.875µs to acquireMachinesLock for "kubernetes-upgrade-967000"
	I0805 16:37:08.029340    4587 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:37:08.029360    4587 fix.go:54] fixHost starting: 
	I0805 16:37:08.029997    4587 fix.go:112] recreateIfNeeded on kubernetes-upgrade-967000: state=Stopped err=<nil>
	W0805 16:37:08.030016    4587 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:37:08.038810    4587 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-967000" ...
	I0805 16:37:08.041854    4587 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:37:08.042113    4587 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:0b:85:6d:1a:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubernetes-upgrade-967000/disk.qcow2
	I0805 16:37:08.050117    4587 main.go:141] libmachine: STDOUT: 
	I0805 16:37:08.050179    4587 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:37:08.050266    4587 fix.go:56] duration metric: took 20.907875ms for fixHost
	I0805 16:37:08.050286    4587 start.go:83] releasing machines lock for "kubernetes-upgrade-967000", held for 20.993583ms
	W0805 16:37:08.050479    4587 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-967000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-967000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:37:08.058867    4587 out.go:177] 
	W0805 16:37:08.061914    4587 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:37:08.061932    4587 out.go:239] * 
	* 
	W0805 16:37:08.063323    4587 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:37:08.077064    4587 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-967000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-967000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-967000 version --output=json: exit status 1 (56.660125ms)

** stderr ** 
	error: context "kubernetes-upgrade-967000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-05 16:37:08.144442 -0700 PDT m=+3018.581321834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-967000 -n kubernetes-upgrade-967000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-967000 -n kubernetes-upgrade-967000: exit status 7 (31.545167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-967000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-967000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-967000
--- FAIL: TestKubernetesUpgrade (18.72s)
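
Note: every qemu2 start attempt above dies on the same refused connection to /var/run/socket_vmnet, which points at the socket_vmnet daemon being down on the CI host rather than at the upgrade path under test. A hypothetical pre-flight guard (a sketch only, not part of the minikube test suite; the helper name and package are invented) could probe the socket and turn this whole class of failure into a skip:

	// Hypothetical pre-flight guard; not from the minikube sources.
	package preflight

	import (
		"net"
		"testing"
		"time"
	)

	// requireSocketVMnet skips the calling test when the socket_vmnet
	// daemon is not accepting connections on its default unix socket,
	// instead of letting every VM start fail with "Connection refused".
	func requireSocketVMnet(t *testing.T) {
		t.Helper()
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
		if err != nil {
			t.Skipf("socket_vmnet not reachable, skipping qemu2 test: %v", err)
		}
		conn.Close()
	}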

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.74s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19373
- KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2316030961/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.74s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.29s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19373
- KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2035965792/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.29s)
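
Note: both hyperkit skip-upgrade subtests fail identically with DRV_UNSUPPORTED_OS (exit status 56) because hyperkit is an Intel-mac hypervisor and this agent is darwin/arm64, so this is an environment mismatch rather than a regression. A sketch of an architecture guard that would skip these subtests up front (hypothetical; the helper name is invented and this is not taken from driver_install_or_update_test.go):

	// Hypothetical arch guard for hyperkit-only tests; not from the
	// minikube sources. hyperkit exists only for Intel macs, so any
	// other host can only ever produce DRV_UNSUPPORTED_OS here.
	package preflight

	import (
		"runtime"
		"testing"
	)

	// skipIfNoHyperkit skips the calling test on hosts where the
	// hyperkit driver cannot run at all.
	func skipIfNoHyperkit(t *testing.T) {
		t.Helper()
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			t.Skip("hyperkit driver requires darwin/amd64")
		}
	}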

TestStoppedBinaryUpgrade/Upgrade (577.68s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2051410073 start -p stopped-upgrade-596000 --memory=2200 --vm-driver=qemu2 
E0805 16:37:48.841950    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2051410073 start -p stopped-upgrade-596000 --memory=2200 --vm-driver=qemu2 : (52.412710916s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2051410073 -p stopped-upgrade-596000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2051410073 -p stopped-upgrade-596000 stop: (3.115656292s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-596000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0805 16:41:06.589856    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
E0805 16:42:48.835459    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-596000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.039154584s)

-- stdout --
	* [stopped-upgrade-596000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-596000" primary control-plane node in "stopped-upgrade-596000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-596000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0805 16:38:04.782446    4650 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:38:04.782593    4650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:38:04.782597    4650 out.go:304] Setting ErrFile to fd 2...
	I0805 16:38:04.782600    4650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:38:04.782777    4650 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:38:04.783936    4650 out.go:298] Setting JSON to false
	I0805 16:38:04.804231    4650 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4055,"bootTime":1722897029,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:38:04.804308    4650 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:38:04.808976    4650 out.go:177] * [stopped-upgrade-596000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:38:04.816935    4650 notify.go:220] Checking for updates...
	I0805 16:38:04.821926    4650 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:38:04.829826    4650 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:38:04.833873    4650 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:38:04.836826    4650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:38:04.839856    4650 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:38:04.842884    4650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:38:04.846111    4650 config.go:182] Loaded profile config "stopped-upgrade-596000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 16:38:04.848882    4650 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0805 16:38:04.851888    4650 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:38:04.855833    4650 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 16:38:04.862889    4650 start.go:297] selected driver: qemu2
	I0805 16:38:04.862896    4650 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-596000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 16:38:04.862941    4650 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:38:04.865767    4650 cni.go:84] Creating CNI manager for ""
	I0805 16:38:04.865783    4650 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:38:04.865815    4650 start.go:340] cluster config:
	{Name:stopped-upgrade-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-596000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 16:38:04.865864    4650 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:38:04.873890    4650 out.go:177] * Starting "stopped-upgrade-596000" primary control-plane node in "stopped-upgrade-596000" cluster
	I0805 16:38:04.877681    4650 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0805 16:38:04.877698    4650 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0805 16:38:04.877707    4650 cache.go:56] Caching tarball of preloaded images
	I0805 16:38:04.877769    4650 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:38:04.877775    4650 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0805 16:38:04.877830    4650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/config.json ...
	I0805 16:38:04.878304    4650 start.go:360] acquireMachinesLock for stopped-upgrade-596000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:38:04.878331    4650 start.go:364] duration metric: took 21.625µs to acquireMachinesLock for "stopped-upgrade-596000"
	I0805 16:38:04.878339    4650 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:38:04.878346    4650 fix.go:54] fixHost starting: 
	I0805 16:38:04.878452    4650 fix.go:112] recreateIfNeeded on stopped-upgrade-596000: state=Stopped err=<nil>
	W0805 16:38:04.878462    4650 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:38:04.882864    4650 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-596000" ...
	I0805 16:38:04.890798    4650 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:38:04.890867    4650 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/stopped-upgrade-596000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/stopped-upgrade-596000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/stopped-upgrade-596000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50468-:22,hostfwd=tcp::50469-:2376,hostname=stopped-upgrade-596000 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/stopped-upgrade-596000/disk.qcow2
	I0805 16:38:04.939397    4650 main.go:141] libmachine: STDOUT: 
	I0805 16:38:04.939423    4650 main.go:141] libmachine: STDERR: 
	I0805 16:38:04.939428    4650 main.go:141] libmachine: Waiting for VM to start (ssh -p 50468 docker@127.0.0.1)...
	I0805 16:38:24.740138    4650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/config.json ...
	I0805 16:38:24.740807    4650 machine.go:94] provisionDockerMachine start ...
	I0805 16:38:24.740980    4650 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:24.741467    4650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104702a10] 0x104705270 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0805 16:38:24.741481    4650 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 16:38:24.835294    4650 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 16:38:24.835328    4650 buildroot.go:166] provisioning hostname "stopped-upgrade-596000"
	I0805 16:38:24.835454    4650 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:24.835713    4650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104702a10] 0x104705270 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0805 16:38:24.835725    4650 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-596000 && echo "stopped-upgrade-596000" | sudo tee /etc/hostname
	I0805 16:38:24.924580    4650 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-596000
	
	I0805 16:38:24.924693    4650 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:24.924910    4650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104702a10] 0x104705270 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0805 16:38:24.924923    4650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-596000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-596000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-596000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:38:25.001144    4650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:38:25.001160    4650 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1054/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1054/.minikube}
	I0805 16:38:25.001171    4650 buildroot.go:174] setting up certificates
	I0805 16:38:25.001177    4650 provision.go:84] configureAuth start
	I0805 16:38:25.001187    4650 provision.go:143] copyHostCerts
	I0805 16:38:25.001282    4650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1054/.minikube/ca.pem, removing ...
	I0805 16:38:25.001290    4650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1054/.minikube/ca.pem
	I0805 16:38:25.001522    4650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1054/.minikube/ca.pem (1078 bytes)
	I0805 16:38:25.001775    4650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1054/.minikube/cert.pem, removing ...
	I0805 16:38:25.001785    4650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1054/.minikube/cert.pem
	I0805 16:38:25.001843    4650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1054/.minikube/cert.pem (1123 bytes)
	I0805 16:38:25.001990    4650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1054/.minikube/key.pem, removing ...
	I0805 16:38:25.001994    4650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1054/.minikube/key.pem
	I0805 16:38:25.002058    4650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1054/.minikube/key.pem (1675 bytes)
	I0805 16:38:25.002201    4650 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-596000 san=[127.0.0.1 localhost minikube stopped-upgrade-596000]
	I0805 16:38:25.102466    4650 provision.go:177] copyRemoteCerts
	I0805 16:38:25.102505    4650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:38:25.102514    4650 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/stopped-upgrade-596000/id_rsa Username:docker}
	I0805 16:38:25.139082    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:38:25.145648    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0805 16:38:25.152248    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0805 16:38:25.159446    4650 provision.go:87] duration metric: took 158.267167ms to configureAuth
	I0805 16:38:25.159455    4650 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:38:25.159551    4650 config.go:182] Loaded profile config "stopped-upgrade-596000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 16:38:25.159589    4650 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:25.159679    4650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104702a10] 0x104705270 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0805 16:38:25.159684    4650 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:38:25.227268    4650 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:38:25.227276    4650 buildroot.go:70] root file system type: tmpfs
	I0805 16:38:25.227336    4650 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:38:25.227375    4650 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:25.227476    4650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104702a10] 0x104705270 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0805 16:38:25.227509    4650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:38:25.297968    4650 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:38:25.298029    4650 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:25.298142    4650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104702a10] 0x104705270 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0805 16:38:25.298153    4650 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:38:25.679368    4650 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:38:25.679381    4650 machine.go:97] duration metric: took 938.583916ms to provisionDockerMachine
	I0805 16:38:25.679388    4650 start.go:293] postStartSetup for "stopped-upgrade-596000" (driver="qemu2")
	I0805 16:38:25.679395    4650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:38:25.679454    4650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:38:25.679463    4650 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/stopped-upgrade-596000/id_rsa Username:docker}
	I0805 16:38:25.715984    4650 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:38:25.717395    4650 info.go:137] Remote host: Buildroot 2021.02.12
	I0805 16:38:25.717402    4650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1054/.minikube/addons for local assets ...
	I0805 16:38:25.717484    4650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1054/.minikube/files for local assets ...
	I0805 16:38:25.717590    4650 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1054/.minikube/files/etc/ssl/certs/15512.pem -> 15512.pem in /etc/ssl/certs
	I0805 16:38:25.717690    4650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:38:25.720157    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/files/etc/ssl/certs/15512.pem --> /etc/ssl/certs/15512.pem (1708 bytes)
	I0805 16:38:25.727401    4650 start.go:296] duration metric: took 48.008542ms for postStartSetup
	I0805 16:38:25.727413    4650 fix.go:56] duration metric: took 20.849489625s for fixHost
	I0805 16:38:25.727448    4650 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:25.727549    4650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104702a10] 0x104705270 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0805 16:38:25.727556    4650 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:38:25.794082    4650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722901106.066039962
	
	I0805 16:38:25.794090    4650 fix.go:216] guest clock: 1722901106.066039962
	I0805 16:38:25.794094    4650 fix.go:229] Guest: 2024-08-05 16:38:26.066039962 -0700 PDT Remote: 2024-08-05 16:38:25.727415 -0700 PDT m=+20.977551918 (delta=338.624962ms)
	I0805 16:38:25.794105    4650 fix.go:200] guest clock delta is within tolerance: 338.624962ms
	I0805 16:38:25.794108    4650 start.go:83] releasing machines lock for "stopped-upgrade-596000", held for 20.916194167s
	I0805 16:38:25.794180    4650 ssh_runner.go:195] Run: cat /version.json
	I0805 16:38:25.794180    4650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:38:25.794187    4650 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/stopped-upgrade-596000/id_rsa Username:docker}
	I0805 16:38:25.794200    4650 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/stopped-upgrade-596000/id_rsa Username:docker}
	W0805 16:38:25.794706    4650 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50468: connect: connection refused
	I0805 16:38:25.794726    4650 retry.go:31] will retry after 362.009177ms: dial tcp [::1]:50468: connect: connection refused
	W0805 16:38:26.193668    4650 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0805 16:38:26.193731    4650 ssh_runner.go:195] Run: systemctl --version
	I0805 16:38:26.195598    4650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 16:38:26.197302    4650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:38:26.197330    4650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0805 16:38:26.200180    4650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0805 16:38:26.204890    4650 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:38:26.204902    4650 start.go:495] detecting cgroup driver to use...
	I0805 16:38:26.204982    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:38:26.211920    4650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0805 16:38:26.215479    4650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:38:26.219000    4650 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:38:26.219027    4650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:38:26.222068    4650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:38:26.224915    4650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:38:26.227958    4650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:38:26.231417    4650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:38:26.234856    4650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:38:26.238119    4650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:38:26.240892    4650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:38:26.243999    4650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:38:26.247285    4650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:38:26.250232    4650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:38:26.335066    4650 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 16:38:26.341041    4650 start.go:495] detecting cgroup driver to use...
	I0805 16:38:26.341092    4650 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:38:26.348715    4650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:38:26.353971    4650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:38:26.360446    4650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:38:26.365283    4650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:38:26.369580    4650 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:38:26.411359    4650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:38:26.416868    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:38:26.422657    4650 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:38:26.423857    4650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:38:26.426560    4650 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:38:26.431420    4650 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:38:26.512927    4650 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:38:26.590094    4650 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:38:26.590155    4650 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:38:26.595496    4650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:38:26.674171    4650 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:38:27.803722    4650 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.129558083s)
	I0805 16:38:27.803782    4650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 16:38:27.808256    4650 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 16:38:27.814547    4650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:38:27.818935    4650 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 16:38:27.892426    4650 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 16:38:27.975047    4650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:38:28.054977    4650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 16:38:28.061197    4650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:38:28.065977    4650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:38:28.151634    4650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 16:38:28.191148    4650 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 16:38:28.191233    4650 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 16:38:28.193372    4650 start.go:563] Will wait 60s for crictl version
	I0805 16:38:28.193397    4650 ssh_runner.go:195] Run: which crictl
	I0805 16:38:28.194707    4650 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 16:38:28.208770    4650 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0805 16:38:28.208842    4650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:38:28.224739    4650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:38:28.243794    4650 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0805 16:38:28.243857    4650 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0805 16:38:28.245216    4650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:38:28.248640    4650 kubeadm.go:883] updating cluster {Name:stopped-upgrade-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-596000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0805 16:38:28.248682    4650 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0805 16:38:28.248726    4650 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:38:28.258927    4650 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 16:38:28.258949    4650 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0805 16:38:28.259000    4650 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 16:38:28.262196    4650 ssh_runner.go:195] Run: which lz4
	I0805 16:38:28.263420    4650 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 16:38:28.264568    4650 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 16:38:28.264577    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0805 16:38:29.186534    4650 docker.go:649] duration metric: took 923.157792ms to copy over tarball
	I0805 16:38:29.186616    4650 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 16:38:30.355542    4650 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.168935416s)
	I0805 16:38:30.355555    4650 ssh_runner.go:146] rm: /preloaded.tar.lz4
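Steps 16:38:28.263 through 16:38:30.355 are the preload pipeline: stat the target, copy the lz4 tarball only if the stat fails, unpack into /var, then remove the archive. A local stand-in for that sequence, assuming the same tar/lz4 flags as the log (the ssh hop is replaced by plain file I/O):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// Existence check, conditional copy, extract, delete -- mirroring
	// the ssh_runner calls above. Paths are illustrative.
	func ensurePreload(src, dst string) error {
		if err := exec.Command("stat", "-c", "%s %y", dst).Run(); err == nil {
			return nil // already present, skip the transfer
		}
		data, err := os.ReadFile(src)
		if err != nil {
			return err
		}
		if err := os.WriteFile(dst, data, 0o644); err != nil {
			return err
		}
		// same flags as the log: tar --xattrs ... -I lz4 -C /var -xf <dst>
		if err := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", dst).Run(); err != nil {
			return err
		}
		return os.Remove(dst)
	}

	func main() {
		if err := ensurePreload("preloaded.tar.lz4", "/preloaded.tar.lz4"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}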
	I0805 16:38:30.371415    4650 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 16:38:30.375098    4650 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0805 16:38:30.380250    4650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:38:30.457444    4650 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:38:32.066011    4650 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.608580833s)
	I0805 16:38:32.066104    4650 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:38:32.082294    4650 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 16:38:32.082308    4650 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0805 16:38:32.082314    4650 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
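Note why the check fails even though images were just loaded: the preload tarball ships k8s.gcr.io tags, while this minikube expects registry.k8s.io names, so the membership test misses every control-plane image. A sketch of that comparison, listing repo:tag pairs the same way the log does (expected list abbreviated):

	package main

	import (
		"bufio"
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		// Abbreviated version of the LoadCachedImages list above.
		expected := []string{
			"registry.k8s.io/kube-apiserver:v1.24.1",
			"registry.k8s.io/etcd:3.5.3-0",
			"registry.k8s.io/coredns/coredns:v1.8.6",
		}
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		have := map[string]bool{}
		sc := bufio.NewScanner(bytes.NewReader(out))
		for sc.Scan() {
			have[sc.Text()] = true
		}
		for _, img := range expected {
			if !have[img] {
				// k8s.gcr.io/... entries don't match, hence this message
				fmt.Printf("%q wasn't preloaded\n", img)
			}
		}
	}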
	I0805 16:38:32.086312    4650 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:38:32.087985    4650 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 16:38:32.089635    4650 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:38:32.089912    4650 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 16:38:32.090928    4650 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 16:38:32.091080    4650 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 16:38:32.092468    4650 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 16:38:32.093828    4650 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0805 16:38:32.093920    4650 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 16:38:32.094189    4650 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 16:38:32.094984    4650 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0805 16:38:32.095254    4650 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 16:38:32.096216    4650 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 16:38:32.096240    4650 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0805 16:38:32.097058    4650 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0805 16:38:32.097703    4650 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 16:38:32.533899    4650 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0805 16:38:32.548102    4650 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0805 16:38:32.548124    4650 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 16:38:32.548176    4650 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0805 16:38:32.549228    4650 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 16:38:32.549758    4650 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0805 16:38:32.552466    4650 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0805 16:38:32.558112    4650 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0805 16:38:32.566379    4650 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0805 16:38:32.566431    4650 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0805 16:38:32.570522    4650 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0805 16:38:32.570541    4650 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 16:38:32.570586    4650 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 16:38:32.580561    4650 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0805 16:38:32.580582    4650 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 16:38:32.580638    4650 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	W0805 16:38:32.588811    4650 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0805 16:38:32.588956    4650 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0805 16:38:32.593239    4650 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0805 16:38:32.593254    4650 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0805 16:38:32.593261    4650 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 16:38:32.593264    4650 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0805 16:38:32.593314    4650 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0805 16:38:32.593318    4650 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0805 16:38:32.593439    4650 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0805 16:38:32.593450    4650 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0805 16:38:32.593467    4650 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0805 16:38:32.620683    4650 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0805 16:38:32.621983    4650 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0805 16:38:32.623564    4650 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0805 16:38:32.623579    4650 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 16:38:32.623621    4650 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0805 16:38:32.633191    4650 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0805 16:38:32.633857    4650 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0805 16:38:32.633970    4650 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0805 16:38:32.634007    4650 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0805 16:38:32.634066    4650 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0805 16:38:32.642913    4650 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0805 16:38:32.642935    4650 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0805 16:38:32.642945    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0805 16:38:32.642944    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0805 16:38:32.643005    4650 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0805 16:38:32.643098    4650 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0805 16:38:32.650029    4650 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0805 16:38:32.650060    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0805 16:38:32.665197    4650 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0805 16:38:32.665218    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0805 16:38:32.705564    4650 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0805 16:38:32.705684    4650 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:38:32.756952    4650 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0805 16:38:32.761901    4650 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0805 16:38:32.761913    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0805 16:38:32.773876    4650 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0805 16:38:32.773905    4650 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:38:32.773971    4650 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:38:32.897425    4650 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0805 16:38:32.897466    4650 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0805 16:38:32.897595    4650 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0805 16:38:32.904979    4650 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0805 16:38:32.905009    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0805 16:38:32.980813    4650 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0805 16:38:32.980827    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0805 16:38:33.279471    4650 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0805 16:38:33.279496    4650 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0805 16:38:33.279507    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0805 16:38:33.411914    4650 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
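Each "Loading image" step above is the shell pipeline `sudo cat <archive> | docker load`. An equivalent sketch that streams the archive into docker load's stdin directly (path illustrative):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// Streams the image archive into docker's stdin, replacing the
	// `cat ... |` half of the pipeline seen in the log.
	func dockerLoad(path string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		cmd := exec.Command("docker", "load")
		cmd.Stdin = f
		cmd.Stdout = os.Stdout
		return cmd.Run()
	}

	func main() {
		if err := dockerLoad("/var/lib/minikube/images/etcd_3.5.3-0"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}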
	I0805 16:38:33.411962    4650 cache_images.go:92] duration metric: took 1.329666875s to LoadCachedImages
	W0805 16:38:33.411998    4650 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0805 16:38:33.412003    4650 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0805 16:38:33.412058    4650 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-596000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-596000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
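In the kubelet drop-in above, the empty ExecStart= line is the systemd idiom that clears the ExecStart inherited from the base unit before the override is applied. A minimal sketch rendering such a drop-in with text/template; the field values are copied from the log, but the template is a simplification, not minikube's real one:

	package main

	import (
		"os"
		"text/template"
	)

	// The blank ExecStart= resets the base unit's command list; the
	// second ExecStart= then installs minikube's kubelet invocation.
	var unit = template.Must(template.New("kubelet").Parse(`[Service]
	ExecStart=
	ExecStart={{.Bin}} --hostname-override={{.Node}} --node-ip={{.IP}}
	`))

	func main() {
		err := unit.Execute(os.Stdout, struct{ Bin, Node, IP string }{
			Bin:  "/var/lib/minikube/binaries/v1.24.1/kubelet",
			Node: "stopped-upgrade-596000",
			IP:   "10.0.2.15",
		})
		if err != nil {
			panic(err)
		}
	}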
	I0805 16:38:33.412132    4650 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 16:38:33.425921    4650 cni.go:84] Creating CNI manager for ""
	I0805 16:38:33.425934    4650 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:38:33.425939    4650 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 16:38:33.425948    4650 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-596000 NodeName:stopped-upgrade-596000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 16:38:33.426012    4650 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-596000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
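The zeroed conntrack values in the KubeProxyConfiguration stanza are how kube-proxy is told to leave those kernel sysctls alone, as the inline comments say. A sketch reading the stanza back, assuming gopkg.in/yaml.v3 is available (any YAML decoder would do):

	package main

	import (
		"fmt"

		"gopkg.in/yaml.v3" // assumed dependency; any YAML decoder works
	)

	const doc = `apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	conntrack:
	  maxPerCore: 0
	  tcpEstablishedTimeout: 0s
	  tcpCloseWaitTimeout: 0s
	`

	type proxyCfg struct {
		Conntrack struct {
			MaxPerCore            int    `yaml:"maxPerCore"`
			TCPEstablishedTimeout string `yaml:"tcpEstablishedTimeout"`
		} `yaml:"conntrack"`
	}

	func main() {
		var c proxyCfg
		if err := yaml.Unmarshal([]byte(doc), &c); err != nil {
			panic(err)
		}
		// zero values here mean kube-proxy skips setting the sysctls
		fmt.Printf("maxPerCore=%d established=%s\n", c.Conntrack.MaxPerCore, c.Conntrack.TCPEstablishedTimeout)
	}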
	
	I0805 16:38:33.426075    4650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0805 16:38:33.428858    4650 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 16:38:33.428888    4650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 16:38:33.431636    4650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0805 16:38:33.436816    4650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 16:38:33.441447    4650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0805 16:38:33.446322    4650 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0805 16:38:33.447527    4650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:38:33.451385    4650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:38:33.530259    4650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:38:33.537775    4650 certs.go:68] Setting up /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000 for IP: 10.0.2.15
	I0805 16:38:33.537784    4650 certs.go:194] generating shared ca certs ...
	I0805 16:38:33.537794    4650 certs.go:226] acquiring lock for ca certs: {Name:mk07f84aa9f3d3ae10a769c730392685ad86b558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:38:33.537965    4650 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/ca.key
	I0805 16:38:33.538000    4650 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/proxy-client-ca.key
	I0805 16:38:33.538005    4650 certs.go:256] generating profile certs ...
	I0805 16:38:33.538069    4650 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/client.key
	I0805 16:38:33.538092    4650 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.key.2a635175
	I0805 16:38:33.538100    4650 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.crt.2a635175 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0805 16:38:33.714823    4650 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.crt.2a635175 ...
	I0805 16:38:33.714835    4650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.crt.2a635175: {Name:mkc5d234715702d6ad60be3acf11728f83485ad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:38:33.715115    4650 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.key.2a635175 ...
	I0805 16:38:33.715120    4650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.key.2a635175: {Name:mk1581b20ad59d081720986c583c873b86ece9a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:38:33.715265    4650 certs.go:381] copying /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.crt.2a635175 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.crt
	I0805 16:38:33.715405    4650 certs.go:385] copying /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.key.2a635175 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.key
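The "generating signed profile cert" steps above produce an apiserver certificate whose SANs are exactly the IPs listed at 16:38:33.538100. A self-contained sketch of building a cert with those IP SANs; it self-signs for brevity, whereas minikube signs with its CA:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			// the IP SANs from the log line above
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
	}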
	I0805 16:38:33.715574    4650 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/proxy-client.key
	I0805 16:38:33.715704    4650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/1551.pem (1338 bytes)
	W0805 16:38:33.715730    4650 certs.go:480] ignoring /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/1551_empty.pem, impossibly tiny 0 bytes
	I0805 16:38:33.715735    4650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 16:38:33.715760    4650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem (1078 bytes)
	I0805 16:38:33.715780    4650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem (1123 bytes)
	I0805 16:38:33.715797    4650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/key.pem (1675 bytes)
	I0805 16:38:33.715836    4650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1054/.minikube/files/etc/ssl/certs/15512.pem (1708 bytes)
	I0805 16:38:33.716173    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 16:38:33.723610    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 16:38:33.731038    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 16:38:33.738654    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 16:38:33.745911    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0805 16:38:33.753029    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 16:38:33.759890    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 16:38:33.767170    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 16:38:33.774577    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/files/etc/ssl/certs/15512.pem --> /usr/share/ca-certificates/15512.pem (1708 bytes)
	I0805 16:38:33.781129    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 16:38:33.787984    4650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/1551.pem --> /usr/share/ca-certificates/1551.pem (1338 bytes)
	I0805 16:38:33.795226    4650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 16:38:33.800372    4650 ssh_runner.go:195] Run: openssl version
	I0805 16:38:33.802469    4650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 16:38:33.805420    4650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:38:33.806797    4650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:38:33.806817    4650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:38:33.808743    4650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 16:38:33.811917    4650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1551.pem && ln -fs /usr/share/ca-certificates/1551.pem /etc/ssl/certs/1551.pem"
	I0805 16:38:33.815257    4650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1551.pem
	I0805 16:38:33.816793    4650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 22:55 /usr/share/ca-certificates/1551.pem
	I0805 16:38:33.816810    4650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1551.pem
	I0805 16:38:33.818600    4650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1551.pem /etc/ssl/certs/51391683.0"
	I0805 16:38:33.821489    4650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15512.pem && ln -fs /usr/share/ca-certificates/15512.pem /etc/ssl/certs/15512.pem"
	I0805 16:38:33.824462    4650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15512.pem
	I0805 16:38:33.825880    4650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 22:55 /usr/share/ca-certificates/15512.pem
	I0805 16:38:33.825897    4650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15512.pem
	I0805 16:38:33.827607    4650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15512.pem /etc/ssl/certs/3ec20f2e.0"
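The openssl -hash / ln -fs pairs above install each PEM under the <subject-hash>.0 name that OpenSSL uses to look up CA certificates. A sketch of the same hash-then-symlink step; paths are illustrative, and it shells out to openssl just as the log does:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// Computes the subject hash and creates <hash>.0 -> pemPath,
	// matching the `openssl x509 -hash` + `ln -fs` sequence above.
	func linkByHash(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		os.Remove(link) // ln -fs semantics: replace an existing link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}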
	I0805 16:38:33.831001    4650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:38:33.832621    4650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 16:38:33.834482    4650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 16:38:33.836318    4650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 16:38:33.838213    4650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 16:38:33.840015    4650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 16:38:33.841849    4650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
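The six `-checkend 86400` runs ask whether each certificate expires within the next 24 hours (86400 seconds). A pure-Go equivalent using crypto/x509, with the path taken from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// Pure-Go version of `openssl x509 -checkend`: true if the cert's
	// NotAfter falls inside the given window from now.
	func expiresSoon(path string, within time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(within).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresSoon("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}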
	I0805 16:38:33.843636    4650 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-596000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 16:38:33.843706    4650 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 16:38:33.854210    4650 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 16:38:33.857361    4650 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 16:38:33.857367    4650 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 16:38:33.857393    4650 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 16:38:33.860277    4650 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:38:33.860569    4650 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-596000" does not appear in /Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:38:33.860675    4650 kubeconfig.go:62] /Users/jenkins/minikube-integration/19373-1054/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-596000" cluster setting kubeconfig missing "stopped-upgrade-596000" context setting]
	I0805 16:38:33.860862    4650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/kubeconfig: {Name:mk0db307fdf97cd8e18f7fd35d350a5523a32e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:38:33.861586    4650 kapi.go:59] client config for stopped-upgrade-596000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1054/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105a97e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:38:33.861919    4650 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 16:38:33.864499    4650 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-596000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0805 16:38:33.864503    4650 kubeadm.go:1160] stopping kube-system containers ...
	I0805 16:38:33.864542    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 16:38:33.875176    4650 docker.go:483] Stopping containers: [4ac4a306b9cd cb1264009016 671b0bb9cd73 846a2455089c e42b40032b59 e5542e7cf8f0 9dfece4a698f 1ab90127fa79 8d6468f134fc]
	I0805 16:38:33.875247    4650 ssh_runner.go:195] Run: docker stop 4ac4a306b9cd cb1264009016 671b0bb9cd73 846a2455089c e42b40032b59 e5542e7cf8f0 9dfece4a698f 1ab90127fa79 8d6468f134fc
	I0805 16:38:33.886005    4650 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 16:38:33.891359    4650 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 16:38:33.894451    4650 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:38:33.894456    4650 kubeadm.go:157] found existing configuration files:
	
	I0805 16:38:33.894476    4650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf
	I0805 16:38:33.896990    4650 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:38:33.897008    4650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 16:38:33.899647    4650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf
	I0805 16:38:33.902498    4650 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:38:33.902517    4650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 16:38:33.905120    4650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf
	I0805 16:38:33.907625    4650 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:38:33.907656    4650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 16:38:33.910631    4650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf
	I0805 16:38:33.913271    4650 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:38:33.913300    4650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 16:38:33.915873    4650 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 16:38:33.918769    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:38:33.941353    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:38:34.499681    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:38:34.634403    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:38:34.663968    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:38:34.693072    4650 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:38:34.693149    4650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:38:35.195319    4650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:38:35.695185    4650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:38:35.699394    4650 api_server.go:72] duration metric: took 1.006343541s to wait for apiserver process to appear ...
	I0805 16:38:35.699404    4650 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:38:35.699413    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:38:40.701384    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:38:40.701410    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:38:45.701533    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:38:45.701558    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:38:50.701763    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:38:50.701804    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:38:55.702170    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:38:55.702208    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:00.702412    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:00.702429    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:05.702910    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:05.702960    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:10.703722    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:10.703748    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:15.703883    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:15.703926    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:20.704868    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:20.704925    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:25.706237    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:25.706282    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:30.707897    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:30.707957    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:35.708395    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
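The repeated healthz probes above are a poll loop: each GET carries a short per-request client timeout, an error is logged as "stopped", and minikube falls back to gathering component logs once the wait is exhausted. A sketch of that loop; InsecureSkipVerify stands in for the CA-pinned TLS client minikube actually builds:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// Polls the apiserver healthz endpoint until it answers 200 or the
	// overall deadline passes, mirroring the cadence in the log.
	func waitHealthz(url string, deadline time.Time) bool {
		client := &http.Client{
			Timeout:   5 * time.Second, // source of "Client.Timeout exceeded"
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return true
				}
			}
			fmt.Println("stopped:", url, err)
			time.Sleep(time.Second)
		}
		return false
	}

	func main() {
		waitHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(30*time.Second))
	}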
	I0805 16:39:35.708562    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:39:35.719540    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:39:35.719620    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:39:35.729807    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:39:35.729866    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:39:35.740105    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:39:35.740173    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:39:35.751045    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:39:35.751116    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:39:35.761920    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:39:35.761986    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:39:35.772087    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:39:35.772146    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:39:35.782693    4650 logs.go:276] 0 containers: []
	W0805 16:39:35.782706    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:39:35.782763    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:39:35.800920    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:39:35.800940    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:39:35.800947    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:39:35.880869    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:39:35.880882    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:39:35.892209    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:39:35.892220    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:39:35.931441    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:39:35.931452    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:39:35.935955    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:39:35.935964    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:39:35.948067    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:39:35.948078    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:39:35.966297    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:39:35.966307    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:39:35.977478    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:39:35.977488    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:39:36.003755    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:39:36.003770    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:39:36.020544    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:39:36.020558    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:39:36.047755    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:39:36.047763    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
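The container-status step above prefers crictl when it is installed and falls back to docker, via `which crictl || echo crictl ... || sudo docker ps -a`. The same choice expressed with exec.LookPath (a sketch; the sudo invocation and flags are copied from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Pick crictl if present on PATH, otherwise fall back to docker.
		tool := "docker"
		if _, err := exec.LookPath("crictl"); err == nil {
			tool = "crictl"
		}
		out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
		fmt.Println(tool, err)
		fmt.Print(string(out))
	}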
	I0805 16:39:36.060539    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:39:36.060553    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:39:36.078297    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:39:36.078307    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:39:36.100713    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:39:36.100726    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:39:36.115648    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:39:36.115661    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:39:36.127339    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:39:36.127350    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:39:36.141207    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:39:36.141223    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:39:38.658460    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:43.660725    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:43.660846    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:39:43.673054    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:39:43.673134    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:39:43.684252    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:39:43.684321    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:39:43.694910    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:39:43.694985    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:39:43.705640    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:39:43.705719    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:39:43.715974    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:39:43.716043    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:39:43.733169    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:39:43.733244    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:39:43.743838    4650 logs.go:276] 0 containers: []
	W0805 16:39:43.743851    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:39:43.743908    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:39:43.754515    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:39:43.754533    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:39:43.754539    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:39:43.792329    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:39:43.792340    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:39:43.803990    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:39:43.804004    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:39:43.815653    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:39:43.815664    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:39:43.833552    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:39:43.833563    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:39:43.845962    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:39:43.845974    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:39:43.850581    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:39:43.850587    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:39:43.864614    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:39:43.864624    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:39:43.889139    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:39:43.889150    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:39:43.903395    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:39:43.903405    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:39:43.929193    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:39:43.929204    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:39:43.946433    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:39:43.946443    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:39:43.986733    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:39:43.986745    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:39:44.012708    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:39:44.012718    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:39:44.027310    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:39:44.027321    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:39:44.039709    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:39:44.039720    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:39:44.053531    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:39:44.053542    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
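
The entries above form one full iteration of the apiserver wait loop: a GET against https://10.0.2.15:8443/healthz with a 5-second client timeout (hence "Client.Timeout exceeded while awaiting headers" exactly 5 s after each check), followed on failure by a sweep that enumerates the k8s_* containers and tails their logs before the next attempt. A minimal sketch of that probe-and-retry shape, assuming a self-signed apiserver certificate and a hypothetical four-minute budget (illustrative Go, not minikube's actual api_server.go):

package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"net/http"
	"time"
)

// probe issues one GET against the apiserver healthz endpoint with the
// same 5-second client timeout seen in the log above.
func probe(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// the apiserver on 10.0.2.15:8443 serves a self-signed cert
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. context deadline exceeded
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return errors.New("healthz returned " + resp.Status)
	}
	return nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // hypothetical overall budget
	for time.Now().Before(deadline) {
		if err := probe("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println("stopped:", err)
			// here the runner enumerates the k8s_* containers and
			// tails their logs (the sweep shown above) before retrying
			time.Sleep(2500 * time.Millisecond)
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
	fmt.Println("gave up waiting for apiserver")
}

With the 5 s probe timeout plus roughly 2.5-3 s of log gathering, each iteration in this log comes out at about 8 seconds, which matches the timestamp spacing between successive healthz checks.
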
	I0805 16:39:46.569236    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:51.571534    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:51.571646    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:39:51.583290    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:39:51.583370    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:39:51.594217    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:39:51.594292    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:39:51.605087    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:39:51.605157    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:39:51.615882    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:39:51.615951    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:39:51.626432    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:39:51.626504    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:39:51.637048    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:39:51.637112    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:39:51.647188    4650 logs.go:276] 0 containers: []
	W0805 16:39:51.647200    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:39:51.647259    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:39:51.657907    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:39:51.657925    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:39:51.657931    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:39:51.694844    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:39:51.694862    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:39:51.709455    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:39:51.709466    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:39:51.722287    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:39:51.722302    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:39:51.742336    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:39:51.742348    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:39:51.753872    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:39:51.753883    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:39:51.766613    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:39:51.766626    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:39:51.804684    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:39:51.804697    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:39:51.818890    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:39:51.818900    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:39:51.843757    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:39:51.843768    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:39:51.866148    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:39:51.866159    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:39:51.870792    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:39:51.870801    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:39:51.886454    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:39:51.886465    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:39:51.912482    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:39:51.912494    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:39:51.924911    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:39:51.924924    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:39:51.945882    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:39:51.945893    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:39:51.957660    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:39:51.957671    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
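
Each sweep begins by resolving container IDs per control-plane component with docker ps -a --filter name=k8s_<component> --format {{.ID}}; an empty result yields the W-level "No container was found matching" line seen for kindnet above. A sketch of that enumeration step, assuming a local docker CLI rather than minikube's ssh_runner, with helper names that are illustrative only:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the enumeration command from the log:
//   docker ps -a --filter name=k8s_<component> --format {{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("enumeration failed:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("W No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			// per-container gathering: docker logs --tail 400 <id>
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			_ = out // the report interleaves this output with the runner log
		}
	}
}

Two IDs per component (e.g. the two kube-apiserver containers 0b63e308c0f5 and 4ac4a306b9cd) indicate an exited earlier instance alongside the current one, which is why both get tailed on every sweep.
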
	I0805 16:39:54.474892    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:39:59.477511    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:39:59.477630    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:39:59.490631    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:39:59.490704    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:39:59.502863    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:39:59.502928    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:39:59.513822    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:39:59.513892    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:39:59.525527    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:39:59.525601    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:39:59.538085    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:39:59.538154    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:39:59.548485    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:39:59.548555    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:39:59.559157    4650 logs.go:276] 0 containers: []
	W0805 16:39:59.559168    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:39:59.559222    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:39:59.570613    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:39:59.570631    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:39:59.570636    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:39:59.586971    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:39:59.586984    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:39:59.613383    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:39:59.613394    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:39:59.627462    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:39:59.627472    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:39:59.645438    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:39:59.645449    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:39:59.656494    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:39:59.656506    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:39:59.660602    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:39:59.660609    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:39:59.705911    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:39:59.705923    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:39:59.718519    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:39:59.718532    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:39:59.730207    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:39:59.730218    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:39:59.746115    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:39:59.746126    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:39:59.762048    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:39:59.762059    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:39:59.787813    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:39:59.787824    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:39:59.804265    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:39:59.804276    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:39:59.826204    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:39:59.826215    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:39:59.842187    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:39:59.842200    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:39:59.881238    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:39:59.881249    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:40:02.397700    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:07.399976    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:07.400232    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:07.418651    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:40:07.418734    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:07.432804    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:40:07.432874    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:07.444542    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:40:07.444615    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:07.455325    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:40:07.455400    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:07.465916    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:40:07.465984    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:07.476195    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:40:07.476263    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:07.486702    4650 logs.go:276] 0 containers: []
	W0805 16:40:07.486714    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:07.486765    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:07.496935    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:40:07.496953    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:40:07.496959    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:40:07.510721    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:40:07.510732    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:40:07.531883    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:40:07.531895    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:40:07.551980    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:40:07.551992    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:40:07.564374    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:40:07.564390    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:40:07.576813    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:40:07.576828    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:40:07.592217    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:40:07.592228    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:07.604044    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:07.604060    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:07.642865    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:07.642878    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:07.647661    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:40:07.647669    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:40:07.661815    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:40:07.661827    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:40:07.674294    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:07.674306    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:07.699549    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:07.699556    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:07.735107    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:40:07.735116    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:40:07.760559    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:40:07.760570    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:40:07.772309    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:40:07.772324    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:40:07.794234    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:40:07.794245    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:40:10.311663    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:15.313914    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:15.314100    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:15.331451    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:40:15.331521    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:15.346195    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:40:15.346260    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:15.356744    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:40:15.356812    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:15.367053    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:40:15.367122    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:15.377853    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:40:15.377926    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:15.389364    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:40:15.389427    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:15.399538    4650 logs.go:276] 0 containers: []
	W0805 16:40:15.399552    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:15.399600    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:15.414453    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:40:15.414472    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:15.414477    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:15.419335    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:40:15.419342    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:40:15.448070    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:40:15.448080    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:40:15.468949    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:40:15.468960    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:40:15.483645    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:40:15.483655    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:15.495494    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:40:15.495506    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:40:15.507572    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:40:15.507582    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:40:15.519773    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:15.519785    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:15.558979    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:40:15.558990    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:40:15.578486    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:40:15.578499    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:40:15.591247    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:15.591261    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:15.616964    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:15.616975    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:15.653173    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:40:15.653188    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:40:15.667388    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:40:15.667400    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:40:15.684177    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:40:15.684188    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:40:15.695587    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:40:15.695598    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:40:15.713422    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:40:15.713434    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:40:18.225817    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:23.228067    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:23.228300    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:23.254451    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:40:23.254553    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:23.273166    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:40:23.273252    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:23.286561    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:40:23.286643    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:23.298149    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:40:23.298216    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:23.308490    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:40:23.308548    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:23.319283    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:40:23.319347    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:23.329523    4650 logs.go:276] 0 containers: []
	W0805 16:40:23.329534    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:23.329591    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:23.340447    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:40:23.340468    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:40:23.340474    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:40:23.354540    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:40:23.354550    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:40:23.379256    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:40:23.379267    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:40:23.402254    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:23.402263    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:23.441304    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:40:23.441313    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:40:23.452272    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:40:23.452287    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:40:23.463564    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:40:23.463575    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:40:23.477802    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:40:23.477812    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:23.491008    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:23.491021    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:23.495656    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:40:23.495663    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:40:23.512941    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:40:23.512956    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:40:23.531247    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:23.531257    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:23.556249    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:23.556258    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:23.594780    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:40:23.594793    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:40:23.609547    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:40:23.609560    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:40:23.631317    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:40:23.631329    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:40:23.643106    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:40:23.643118    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:40:26.164924    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:31.167257    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:31.167466    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:31.187535    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:40:31.187624    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:31.201701    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:40:31.201776    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:31.213105    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:40:31.213177    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:31.223608    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:40:31.223675    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:31.234298    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:40:31.234368    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:31.244877    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:40:31.244950    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:31.254923    4650 logs.go:276] 0 containers: []
	W0805 16:40:31.254934    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:31.254994    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:31.265506    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:40:31.265523    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:40:31.265529    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:40:31.279693    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:40:31.279703    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:40:31.308731    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:40:31.308743    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:40:31.322770    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:40:31.322781    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:40:31.337940    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:40:31.337953    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:40:31.349255    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:40:31.349266    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:40:31.378371    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:40:31.378382    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:40:31.395930    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:31.395939    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:31.420926    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:40:31.420936    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:40:31.435330    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:40:31.435340    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:40:31.446630    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:40:31.446641    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:40:31.469743    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:40:31.469754    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:31.481327    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:31.481337    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:31.518026    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:31.518036    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:31.522160    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:31.522169    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:31.556659    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:40:31.556670    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:40:31.569246    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:40:31.569257    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
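
Besides the per-container docker logs, every sweep also collects host-level sources: the kubelet and docker/cri-docker journald units, dmesg filtered to warning level and above, kubectl describe nodes run with the pinned v1.24.1 binary against the node's kubeconfig, and a container-status listing that falls back from crictl to docker ps. A sketch of that dispatch, with the command strings taken verbatim from the log; the surrounding structure is illustrative, not minikube's actual logs.go:

package main

import (
	"fmt"
	"os/exec"
)

// gatherer pairs the label printed as "Gathering logs for <name> ..."
// with the shell command the runner executes via /bin/bash -c.
type gatherer struct {
	name string
	cmd  string
}

func main() {
	gatherers := []gatherer{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, g := range gatherers {
		fmt.Printf("Gathering logs for %s ...\n", g.name)
		out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
		if err != nil {
			fmt.Println("gather failed:", err)
		}
		fmt.Printf("%s", out)
	}
}

Note the container-status command: `which crictl || echo crictl` resolves crictl if installed (otherwise the bare name, which then fails), and the outer || falls back to sudo docker ps -a, so a status listing is produced on both CRI and dockershim-style nodes.
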
	I0805 16:40:34.084180    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:39.086419    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:39.086520    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:39.097836    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:40:39.097911    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:39.111108    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:40:39.111181    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:39.122009    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:40:39.122074    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:39.137076    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:40:39.137150    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:39.148068    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:40:39.148142    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:39.158860    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:40:39.158933    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:39.169644    4650 logs.go:276] 0 containers: []
	W0805 16:40:39.169655    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:39.169716    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:39.180535    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:40:39.180551    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:40:39.180556    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:40:39.202404    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:40:39.202415    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:40:39.213713    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:40:39.213725    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:40:39.238913    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:40:39.238923    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:40:39.252791    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:40:39.252801    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:40:39.264055    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:40:39.264067    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:40:39.281792    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:40:39.281802    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:40:39.296205    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:40:39.296218    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:39.308368    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:39.308378    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:39.345299    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:40:39.345307    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:40:39.357072    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:40:39.357082    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:40:39.368004    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:39.368016    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:39.391349    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:39.391357    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:39.395731    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:39.395742    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:39.430687    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:40:39.430699    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:40:39.446547    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:40:39.446559    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:40:39.461978    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:40:39.461989    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:40:41.976777    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:46.979162    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:46.979591    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:47.015190    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:40:47.015320    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:47.034797    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:40:47.034889    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:47.049621    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:40:47.049701    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:47.061749    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:40:47.061823    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:47.072401    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:40:47.072470    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:47.083092    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:40:47.083162    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:47.093540    4650 logs.go:276] 0 containers: []
	W0805 16:40:47.093554    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:47.093611    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:47.103774    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:40:47.103820    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:47.103826    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:47.139099    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:40:47.139113    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:40:47.151027    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:47.151038    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:47.155843    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:40:47.155850    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:40:47.181076    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:40:47.181086    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:40:47.195372    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:40:47.195381    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:40:47.206764    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:40:47.206776    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:47.219040    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:47.219050    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:47.259307    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:40:47.259327    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:40:47.277935    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:40:47.277946    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:40:47.292668    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:40:47.292677    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:40:47.304948    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:47.304959    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:47.330231    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:40:47.330250    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:40:47.352962    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:40:47.352977    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:40:47.371979    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:40:47.371994    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:40:47.388678    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:40:47.388691    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:40:47.401703    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:40:47.401715    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:40:49.916008    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:40:54.918238    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:40:54.918467    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:40:54.941035    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:40:54.941128    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:40:54.957384    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:40:54.957458    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:40:54.969676    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:40:54.969744    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:40:54.980807    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:40:54.980880    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:40:54.991842    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:40:54.991918    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:40:55.002356    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:40:55.002419    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:40:55.013428    4650 logs.go:276] 0 containers: []
	W0805 16:40:55.013443    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:40:55.013507    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:40:55.024657    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:40:55.024676    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:40:55.024682    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:40:55.039545    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:40:55.039558    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:40:55.051642    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:40:55.051654    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:40:55.070222    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:40:55.070232    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:40:55.090639    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:40:55.090652    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:40:55.095939    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:40:55.095949    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:40:55.134140    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:40:55.134153    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:40:55.161190    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:40:55.161209    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:40:55.201965    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:40:55.201990    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:40:55.219062    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:40:55.219073    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:40:55.233208    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:40:55.233220    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:40:55.247936    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:40:55.247946    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:40:55.271593    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:40:55.271604    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:40:55.284724    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:40:55.284732    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:40:55.309078    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:40:55.309088    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:40:55.323446    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:40:55.323457    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:40:55.335521    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:40:55.335536    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:40:57.849599    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:02.851754    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:02.851996    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:02.882864    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:41:02.882964    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:02.900666    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:41:02.900735    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:02.913447    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:41:02.913487    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:02.925179    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:41:02.925218    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:02.936096    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:41:02.936132    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:02.947524    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:41:02.947593    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:02.959165    4650 logs.go:276] 0 containers: []
	W0805 16:41:02.959177    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:02.959236    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:02.970551    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:41:02.970572    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:41:02.970577    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:41:02.982792    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:41:02.982804    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:41:03.002520    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:41:03.002532    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:41:03.015232    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:03.015244    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:03.056155    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:03.056165    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:03.100439    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:41:03.100453    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:41:03.112774    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:03.112789    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:03.138143    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:03.138155    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:03.143168    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:41:03.143177    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:41:03.157369    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:41:03.157379    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:41:03.169694    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:41:03.169702    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:41:03.192509    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:41:03.192525    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:41:03.208280    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:41:03.208289    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:41:03.234991    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:41:03.235007    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:41:03.249855    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:41:03.249866    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:41:03.264896    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:41:03.264907    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:41:03.277335    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:41:03.277345    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:05.790096    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:10.790796    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:10.790908    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:10.808889    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:41:10.808941    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:10.822596    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:41:10.822633    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:10.836659    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:41:10.836731    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:10.847979    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:41:10.848039    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:10.859823    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:41:10.859865    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:10.871497    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:41:10.871569    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:10.883244    4650 logs.go:276] 0 containers: []
	W0805 16:41:10.883263    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:10.883366    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:10.894763    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:41:10.894781    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:10.894786    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:10.900203    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:41:10.900212    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:41:10.916019    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:41:10.916035    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:41:10.947751    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:41:10.947759    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:41:10.967558    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:10.967569    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:11.006225    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:41:11.006244    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:41:11.020949    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:41:11.020961    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:41:11.036962    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:41:11.036973    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:41:11.049425    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:41:11.049437    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:41:11.076104    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:41:11.076120    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:41:11.088727    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:11.088740    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:11.125125    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:41:11.125136    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:41:11.138869    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:41:11.138885    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:41:11.151125    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:41:11.151136    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:41:11.162685    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:11.162697    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:11.186238    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:41:11.186247    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:11.198134    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:41:11.198146    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:41:13.718385    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:18.720610    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:18.720689    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:18.732327    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:41:18.732398    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:18.744022    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:41:18.744086    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:18.759002    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:41:18.759079    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:18.772267    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:41:18.772344    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:18.784636    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:41:18.784712    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:18.796335    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:41:18.796407    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:18.806824    4650 logs.go:276] 0 containers: []
	W0805 16:41:18.806835    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:18.806899    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:18.818748    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:41:18.818764    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:41:18.818771    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:41:18.841576    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:41:18.841586    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:41:18.854511    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:41:18.854523    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:41:18.867032    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:41:18.867044    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:18.881273    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:18.881284    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:18.923137    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:41:18.923148    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:41:18.938139    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:41:18.938148    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:41:18.953257    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:41:18.953267    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:41:18.965678    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:41:18.965689    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:41:18.981071    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:41:18.981086    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:41:18.992335    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:41:18.992346    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:41:19.007453    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:19.007463    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:19.032797    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:19.032809    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:19.037281    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:41:19.037290    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:41:19.062113    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:41:19.062125    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:41:19.082887    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:19.082899    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:19.122168    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:41:19.122181    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:41:21.635633    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:26.637916    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:26.638028    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:26.650309    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:41:26.650400    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:26.668117    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:41:26.668191    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:26.680312    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:41:26.680386    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:26.692689    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:41:26.692765    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:26.704069    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:41:26.704139    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:26.715309    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:41:26.715380    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:26.725846    4650 logs.go:276] 0 containers: []
	W0805 16:41:26.725859    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:26.725921    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:26.736991    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:41:26.737009    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:41:26.737014    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:41:26.749466    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:41:26.749480    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:41:26.762437    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:41:26.762449    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:41:26.780991    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:41:26.781007    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:41:26.793834    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:41:26.793847    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:26.806863    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:41:26.806874    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:41:26.822500    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:41:26.822516    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:41:26.839615    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:41:26.839628    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:41:26.862124    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:26.862138    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:26.885826    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:26.885834    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:26.927610    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:41:26.927621    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:41:26.957172    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:41:26.957181    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:41:26.968610    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:26.968624    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:27.007655    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:27.007667    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:27.012017    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:41:27.012023    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:41:27.026233    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:41:27.026248    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:41:27.041950    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:41:27.041961    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:41:29.558974    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:34.561090    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:34.561169    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:34.572566    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:41:34.572630    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:34.584546    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:41:34.584609    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:34.596025    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:41:34.596089    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:34.607394    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:41:34.607461    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:34.619269    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:41:34.619345    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:34.630708    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:41:34.630786    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:34.644241    4650 logs.go:276] 0 containers: []
	W0805 16:41:34.644250    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:34.644309    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:34.661652    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:41:34.661672    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:34.661678    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:34.700507    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:41:34.700521    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:41:34.717589    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:41:34.717601    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:41:34.732780    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:34.732794    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:34.772203    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:41:34.772215    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:41:34.784773    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:41:34.784784    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:41:34.806035    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:41:34.806046    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:41:34.823661    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:41:34.823672    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:41:34.835174    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:41:34.835186    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:41:34.846720    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:34.846733    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:34.871430    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:41:34.871439    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:41:34.897257    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:41:34.897268    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:41:34.911380    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:41:34.911392    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:41:34.927559    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:41:34.927571    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:34.939126    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:34.939137    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:34.943461    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:41:34.943468    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:41:34.956936    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:41:34.956947    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:41:37.471463    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:42.474012    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:42.474159    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:42.485605    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:41:42.485674    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:42.496825    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:41:42.496903    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:42.508531    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:41:42.508603    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:42.524506    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:41:42.524581    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:42.536905    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:41:42.536980    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:42.548632    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:41:42.548711    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:42.560068    4650 logs.go:276] 0 containers: []
	W0805 16:41:42.560079    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:42.560139    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:42.572112    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:41:42.572128    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:42.572133    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:42.612863    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:42.612881    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:42.654429    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:41:42.654440    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:41:42.680844    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:41:42.680853    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:41:42.695967    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:41:42.695979    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:41:42.716753    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:41:42.716764    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:41:42.728901    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:41:42.728912    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:41:42.740428    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:42.740438    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:42.744639    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:41:42.744645    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:41:42.759275    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:41:42.759285    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:41:42.771336    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:41:42.771349    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:41:42.789227    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:41:42.789238    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:41:42.804244    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:41:42.804254    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:41:42.819457    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:41:42.819470    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:41:42.830946    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:41:42.830957    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:41:42.842592    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:42.842603    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:42.865375    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:41:42.865384    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:45.379572    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:50.381717    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:50.381823    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:50.396827    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:41:50.396900    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:50.412151    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:41:50.412218    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:50.423255    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:41:50.423322    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:50.437203    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:41:50.437276    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:50.448433    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:41:50.448512    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:50.459841    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:41:50.459920    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:50.471598    4650 logs.go:276] 0 containers: []
	W0805 16:41:50.471609    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:50.471671    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:50.483560    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:41:50.483580    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:41:50.483585    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:41:50.498916    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:41:50.498927    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:41:50.510433    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:41:50.510445    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:41:50.522269    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:41:50.522280    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:41:50.544151    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:41:50.544163    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:41:50.558798    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:50.558812    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:50.581261    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:41:50.581270    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:41:50.595251    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:41:50.595262    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:41:50.610209    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:41:50.610220    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:41:50.622181    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:50.622191    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:50.659566    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:50.659577    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:50.694690    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:41:50.694701    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:41:50.719448    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:41:50.719460    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:41:50.730841    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:41:50.730851    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:50.742274    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:50.742287    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:50.746690    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:41:50.746698    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:41:50.764853    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:41:50.764864    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:41:53.279006    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:41:58.279577    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:41:58.279663    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:41:58.290656    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:41:58.290734    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:41:58.305811    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:41:58.305880    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:41:58.317364    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:41:58.317443    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:41:58.328688    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:41:58.328764    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:41:58.340356    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:41:58.340436    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:41:58.351932    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:41:58.352000    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:41:58.364682    4650 logs.go:276] 0 containers: []
	W0805 16:41:58.364696    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:41:58.364762    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:41:58.375868    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:41:58.375888    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:41:58.375893    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:41:58.414995    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:41:58.415011    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:41:58.419496    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:41:58.419505    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:41:58.433381    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:41:58.433395    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:41:58.458108    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:41:58.458121    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:41:58.469843    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:41:58.469853    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:41:58.485046    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:41:58.485056    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:41:58.502094    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:41:58.502105    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:41:58.514190    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:41:58.514202    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:41:58.531805    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:41:58.531816    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:41:58.546512    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:41:58.546526    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:41:58.559700    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:41:58.559709    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:41:58.594165    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:41:58.594178    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:41:58.608519    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:41:58.608527    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:41:58.627019    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:41:58.627029    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:41:58.647665    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:41:58.647676    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:41:58.669728    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:41:58.669739    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:42:01.183906    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:06.186111    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:06.186182    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:42:06.197515    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:42:06.197587    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:42:06.209405    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:42:06.209474    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:42:06.221144    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:42:06.221210    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:42:06.233035    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:42:06.233097    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:42:06.244343    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:42:06.244406    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:42:06.255956    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:42:06.256024    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:42:06.266507    4650 logs.go:276] 0 containers: []
	W0805 16:42:06.266520    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:42:06.266577    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:42:06.277218    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:42:06.277238    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:42:06.277243    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:42:06.314792    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:42:06.314803    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:42:06.339652    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:42:06.339664    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:42:06.351088    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:42:06.351101    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:42:06.362827    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:42:06.362840    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:42:06.376637    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:42:06.376648    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:42:06.388419    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:42:06.388434    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:42:06.403549    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:42:06.403561    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:42:06.415323    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:42:06.415334    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:42:06.432931    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:42:06.432942    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:42:06.444223    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:42:06.444236    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:42:06.466088    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:42:06.466103    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:42:06.503901    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:42:06.503914    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:42:06.517834    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:42:06.517846    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:42:06.533737    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:42:06.533748    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:42:06.546455    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:42:06.546466    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:42:06.550837    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:42:06.550846    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:42:09.074462    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:14.076641    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:14.076734    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:42:14.094239    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:42:14.094316    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:42:14.105703    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:42:14.105783    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:42:14.116500    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:42:14.116557    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:42:14.126983    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:42:14.127050    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:42:14.142012    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:42:14.142086    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:42:14.152513    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:42:14.152577    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:42:14.162901    4650 logs.go:276] 0 containers: []
	W0805 16:42:14.162916    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:42:14.162968    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:42:14.178025    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:42:14.178044    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:42:14.178051    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:42:14.218008    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:42:14.218019    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:42:14.254324    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:42:14.254335    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:42:14.268663    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:42:14.268674    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:42:14.280891    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:42:14.280904    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:42:14.292875    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:42:14.292887    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:42:14.304893    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:42:14.304906    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:42:14.311488    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:42:14.311500    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:42:14.336404    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:42:14.336419    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:42:14.362533    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:42:14.362543    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:42:14.373840    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:42:14.373851    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:42:14.398164    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:42:14.398177    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:42:14.412107    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:42:14.412117    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:42:14.434799    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:42:14.434808    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:42:14.446144    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:42:14.446156    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:42:14.467603    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:42:14.467613    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:42:14.484063    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:42:14.484074    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:42:16.998387    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:22.000624    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:22.000739    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:42:22.011551    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:42:22.011622    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:42:22.022162    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:42:22.022221    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:42:22.033576    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:42:22.033636    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:42:22.044019    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:42:22.044082    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:42:22.058995    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:42:22.059058    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:42:22.072650    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:42:22.072725    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:42:22.082497    4650 logs.go:276] 0 containers: []
	W0805 16:42:22.082510    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:42:22.082557    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:42:22.095988    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:42:22.096003    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:42:22.096010    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:42:22.107950    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:42:22.107960    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:42:22.119523    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:42:22.119535    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
	I0805 16:42:22.131165    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:42:22.131174    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:42:22.164711    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:42:22.164725    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:42:22.186093    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:42:22.186104    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:42:22.203121    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:42:22.203130    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:42:22.214275    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:42:22.214289    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:42:22.229206    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:42:22.229215    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:42:22.240310    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:42:22.240322    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:42:22.258411    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:42:22.258424    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:42:22.282307    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:42:22.282323    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:42:22.295051    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:42:22.295066    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:42:22.333364    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:42:22.333381    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:42:22.337531    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:42:22.337540    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:42:22.366618    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:42:22.366637    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:42:22.396571    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:42:22.396586    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:42:24.913075    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:29.915186    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:29.915292    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:42:29.930347    4650 logs.go:276] 2 containers: [0b63e308c0f5 4ac4a306b9cd]
	I0805 16:42:29.930421    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:42:29.942115    4650 logs.go:276] 2 containers: [2a86e292f226 846a2455089c]
	I0805 16:42:29.942184    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:42:29.953061    4650 logs.go:276] 1 containers: [232b5973da55]
	I0805 16:42:29.953131    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:42:29.963834    4650 logs.go:276] 2 containers: [c2ff009715c3 9dfece4a698f]
	I0805 16:42:29.963899    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:42:29.975478    4650 logs.go:276] 1 containers: [ab36bdbff57a]
	I0805 16:42:29.975543    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:42:29.986698    4650 logs.go:276] 2 containers: [04d3efb2675a 671b0bb9cd73]
	I0805 16:42:29.986766    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:42:29.997261    4650 logs.go:276] 0 containers: []
	W0805 16:42:29.997272    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:42:29.997327    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:42:30.007625    4650 logs.go:276] 2 containers: [9285354cc4cf b33601027326]
	I0805 16:42:30.007644    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:42:30.007649    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:42:30.047309    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:42:30.047319    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:42:30.051455    4650 logs.go:123] Gathering logs for kube-scheduler [c2ff009715c3] ...
	I0805 16:42:30.051462    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ff009715c3"
	I0805 16:42:30.064737    4650 logs.go:123] Gathering logs for kube-controller-manager [671b0bb9cd73] ...
	I0805 16:42:30.064752    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 671b0bb9cd73"
	I0805 16:42:30.079695    4650 logs.go:123] Gathering logs for kube-proxy [ab36bdbff57a] ...
	I0805 16:42:30.079706    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab36bdbff57a"
	I0805 16:42:30.091746    4650 logs.go:123] Gathering logs for storage-provisioner [b33601027326] ...
	I0805 16:42:30.091757    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b33601027326"
	I0805 16:42:30.102980    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:42:30.102991    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:42:30.124675    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:42:30.124683    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:42:30.136937    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:42:30.136947    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:42:30.172040    4650 logs.go:123] Gathering logs for kube-apiserver [0b63e308c0f5] ...
	I0805 16:42:30.172052    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b63e308c0f5"
	I0805 16:42:30.186177    4650 logs.go:123] Gathering logs for kube-apiserver [4ac4a306b9cd] ...
	I0805 16:42:30.186187    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac4a306b9cd"
	I0805 16:42:30.211125    4650 logs.go:123] Gathering logs for etcd [846a2455089c] ...
	I0805 16:42:30.211136    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846a2455089c"
	I0805 16:42:30.226431    4650 logs.go:123] Gathering logs for kube-controller-manager [04d3efb2675a] ...
	I0805 16:42:30.226443    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d3efb2675a"
	I0805 16:42:30.244483    4650 logs.go:123] Gathering logs for etcd [2a86e292f226] ...
	I0805 16:42:30.244492    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a86e292f226"
	I0805 16:42:30.259236    4650 logs.go:123] Gathering logs for coredns [232b5973da55] ...
	I0805 16:42:30.259247    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 232b5973da55"
	I0805 16:42:30.270459    4650 logs.go:123] Gathering logs for kube-scheduler [9dfece4a698f] ...
	I0805 16:42:30.270471    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfece4a698f"
	I0805 16:42:30.291782    4650 logs.go:123] Gathering logs for storage-provisioner [9285354cc4cf] ...
	I0805 16:42:30.291793    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9285354cc4cf"
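The gathering pass then tails each discovered container plus the host-level sources (kubelet and Docker journals, dmesg, `kubectl describe nodes`). A condensed sketch of that fan-out, with the shell commands taken verbatim from the log (the map keys and container ID are illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs each log source through bash -c, as the ssh_runner
// lines above do, and returns the combined output keyed by source
// name. Errors are folded into the output rather than aborting, so
// one dead container does not stop the collection.
func gather(sources map[string]string) map[string]string {
	logs := make(map[string]string, len(sources))
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			out = append(out, []byte("\n(error: "+err.Error()+")")...)
		}
		logs[name] = string(out)
	}
	return logs
}

func main() {
	logs := gather(map[string]string{
		"kubelet": "sudo journalctl -u kubelet -n 400",
		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"etcd":    "docker logs --tail 400 2a86e292f226", // ID from the log above
	})
	for name, out := range logs {
		fmt.Printf("=== %s (%d bytes)\n", name, len(out))
	}
}
```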
	I0805 16:42:32.805470    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:37.807760    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:37.807828    4650 kubeadm.go:597] duration metric: took 4m3.955374s to restartPrimaryControlPlane
	W0805 16:42:37.807866    4650 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 16:42:37.807885    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0805 16:42:38.773604    4650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:42:38.778837    4650 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 16:42:38.781641    4650 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 16:42:38.784565    4650 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:42:38.784572    4650 kubeadm.go:157] found existing configuration files:
	
	I0805 16:42:38.784593    4650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf
	I0805 16:42:38.787206    4650 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:42:38.787234    4650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 16:42:38.790264    4650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf
	I0805 16:42:38.793516    4650 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:42:38.793537    4650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 16:42:38.797044    4650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf
	I0805 16:42:38.799996    4650 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:42:38.800016    4650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 16:42:38.802644    4650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf
	I0805 16:42:38.805501    4650 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:42:38.805521    4650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
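The four grep/rm pairs above are the stale-config check: for each kubeconfig under /etc/kubernetes, grep for the expected control-plane endpoint and remove the file when the endpoint is absent. Here the files do not exist at all, so every grep exits with status 2 and every `rm -f` is a no-op. A small sketch of that loop, assuming the same file list and endpoint:

```go
package main

import (
	"fmt"
	"os/exec"
)

// cleanupStaleConfigs removes any kubeconfig that does not mention
// the expected control-plane endpoint. grep exits non-zero both when
// the pattern is missing and when the file is absent, so rm -f is
// safe in either case -- matching the log's grep/rm pairs.
func cleanupStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:50503", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```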
	I0805 16:42:38.808602    4650 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 16:42:38.824282    4650 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0805 16:42:38.824322    4650 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 16:42:38.875973    4650 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 16:42:38.876031    4650 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 16:42:38.876089    4650 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 16:42:38.924234    4650 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 16:42:38.932383    4650 out.go:204]   - Generating certificates and keys ...
	I0805 16:42:38.932423    4650 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 16:42:38.932460    4650 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 16:42:38.932497    4650 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 16:42:38.932528    4650 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 16:42:38.932569    4650 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 16:42:38.932598    4650 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 16:42:38.932631    4650 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 16:42:38.932661    4650 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 16:42:38.932721    4650 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 16:42:38.932773    4650 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 16:42:38.932796    4650 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 16:42:38.932840    4650 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 16:42:39.074663    4650 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 16:42:39.220547    4650 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 16:42:39.320055    4650 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 16:42:39.414354    4650 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 16:42:39.443240    4650 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 16:42:39.443655    4650 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 16:42:39.443680    4650 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 16:42:39.527342    4650 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 16:42:39.530647    4650 out.go:204]   - Booting up control plane ...
	I0805 16:42:39.530722    4650 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 16:42:39.530767    4650 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 16:42:39.530821    4650 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 16:42:39.530863    4650 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 16:42:39.530969    4650 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 16:42:44.032737    4650 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.506058 seconds
	I0805 16:42:44.032811    4650 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 16:42:44.036887    4650 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 16:42:44.545529    4650 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 16:42:44.545638    4650 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-596000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 16:42:45.049998    4650 kubeadm.go:310] [bootstrap-token] Using token: bx3rbc.9i1vtplwmfu92vdl
	I0805 16:42:45.056431    4650 out.go:204]   - Configuring RBAC rules ...
	I0805 16:42:45.056497    4650 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 16:42:45.056540    4650 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 16:42:45.058412    4650 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 16:42:45.062835    4650 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 16:42:45.063808    4650 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 16:42:45.064710    4650 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 16:42:45.068098    4650 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 16:42:45.253608    4650 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 16:42:45.454448    4650 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 16:42:45.454808    4650 kubeadm.go:310] 
	I0805 16:42:45.454860    4650 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 16:42:45.454867    4650 kubeadm.go:310] 
	I0805 16:42:45.454906    4650 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 16:42:45.454911    4650 kubeadm.go:310] 
	I0805 16:42:45.454927    4650 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 16:42:45.454982    4650 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 16:42:45.455017    4650 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 16:42:45.455022    4650 kubeadm.go:310] 
	I0805 16:42:45.455048    4650 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 16:42:45.455052    4650 kubeadm.go:310] 
	I0805 16:42:45.455074    4650 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 16:42:45.455077    4650 kubeadm.go:310] 
	I0805 16:42:45.455101    4650 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 16:42:45.455143    4650 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 16:42:45.455182    4650 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 16:42:45.455185    4650 kubeadm.go:310] 
	I0805 16:42:45.455237    4650 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 16:42:45.455278    4650 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 16:42:45.455281    4650 kubeadm.go:310] 
	I0805 16:42:45.455329    4650 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bx3rbc.9i1vtplwmfu92vdl \
	I0805 16:42:45.455391    4650 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7280cf86517627a1b2e8b1aa5e2d30adc1efda7485123a11788055778cfe70b7 \
	I0805 16:42:45.455408    4650 kubeadm.go:310] 	--control-plane 
	I0805 16:42:45.455413    4650 kubeadm.go:310] 
	I0805 16:42:45.455455    4650 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 16:42:45.455457    4650 kubeadm.go:310] 
	I0805 16:42:45.455504    4650 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bx3rbc.9i1vtplwmfu92vdl \
	I0805 16:42:45.455555    4650 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7280cf86517627a1b2e8b1aa5e2d30adc1efda7485123a11788055778cfe70b7 
	I0805 16:42:45.455671    4650 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
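The `--discovery-token-ca-cert-hash sha256:...` value in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo), which joining nodes use to pin the CA. A standalone sketch of recomputing that hash from ca.crt to verify it against the log (path as minikube lays it out in the guest):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash reproduces kubeadm's discovery-token-ca-cert-hash: the
// SHA-256 of the CA certificate's DER-encoded SubjectPublicKeyInfo.
func caCertHash(certPath string) (string, error) {
	pemBytes, err := os.ReadFile(certPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("sha256:%x", sha256.Sum256(spki)), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(h)
}
```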
	I0805 16:42:45.455682    4650 cni.go:84] Creating CNI manager for ""
	I0805 16:42:45.455689    4650 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:42:45.459823    4650 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 16:42:45.467687    4650 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 16:42:45.470735    4650 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
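The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration the "Configuring bridge CNI" step refers to. The log does not show its contents; below is a minimal conflist of that general shape, generated in Go with illustrative values (the subnet, bridge name, and plugin list are assumptions, not minikube's actual payload):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative bridge CNI conflist; field names follow the
	// standard CNI bridge and host-local IPAM plugins, but the
	// concrete values here are guesses for demonstration only.
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}
```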
	I0805 16:42:45.475816    4650 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 16:42:45.475907    4650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-596000 minikube.k8s.io/updated_at=2024_08_05T16_42_45_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4 minikube.k8s.io/name=stopped-upgrade-596000 minikube.k8s.io/primary=true
	I0805 16:42:45.475946    4650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:42:45.479198    4650 ops.go:34] apiserver oom_adj: -16
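`apiserver oom_adj: -16` is read straight from /proc/<pid>/oom_adj, as the `cat /proc/$(pgrep kube-apiserver)/oom_adj` line above shows; a negative value tells the kernel OOM killer to prefer other processes, so the apiserver survives memory pressure. Reading it back is a short sketch (oom_adj is the legacy interface; modern kernels also expose oom_score_adj):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the newest kube-apiserver PID, then read its legacy OOM
	// score adjustment; -16 deprioritizes it for the OOM killer.
	pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("no apiserver process:", err)
		return
	}
	adj, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}
```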
	I0805 16:42:45.530771    4650 kubeadm.go:1113] duration metric: took 54.919625ms to wait for elevateKubeSystemPrivileges
	I0805 16:42:45.530871    4650 kubeadm.go:394] duration metric: took 4m11.692310292s to StartCluster
	I0805 16:42:45.530887    4650 settings.go:142] acquiring lock: {Name:mk8f45924d83b23294fe6a7ba250768dbca87de2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:42:45.530998    4650 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:42:45.531466    4650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/kubeconfig: {Name:mk0db307fdf97cd8e18f7fd35d350a5523a32e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:42:45.531679    4650 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:42:45.531718    4650 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 16:42:45.531755    4650 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-596000"
	I0805 16:42:45.531768    4650 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-596000"
	W0805 16:42:45.531774    4650 addons.go:243] addon storage-provisioner should already be in state true
	I0805 16:42:45.531787    4650 host.go:66] Checking if "stopped-upgrade-596000" exists ...
	I0805 16:42:45.531783    4650 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-596000"
	I0805 16:42:45.531807    4650 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-596000"
	I0805 16:42:45.531827    4650 config.go:182] Loaded profile config "stopped-upgrade-596000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 16:42:45.532832    4650 kapi.go:59] client config for stopped-upgrade-596000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1054/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105a97e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
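The rest.Config dump above (rejoined onto one line) is client-go's in-memory view of the kubeconfig minikube just wrote. Building an equivalent client by hand takes only the host and the three TLS paths shown; a sketch using k8s.io/client-go (paths copied from the log, error handling minimal):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Mirror the fields visible in the logged rest.Config: host plus
	// client cert/key and the CA file.
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/stopped-upgrade-596000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/19373-1054/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Any call will time out while the apiserver is unreachable,
	// which is exactly what the healthz loop below keeps reporting.
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	fmt.Println(nodes, err)
}
```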
	I0805 16:42:45.532951    4650 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-596000"
	W0805 16:42:45.532956    4650 addons.go:243] addon default-storageclass should already be in state true
	I0805 16:42:45.532963    4650 host.go:66] Checking if "stopped-upgrade-596000" exists ...
	I0805 16:42:45.535871    4650 out.go:177] * Verifying Kubernetes components...
	I0805 16:42:45.536196    4650 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 16:42:45.539878    4650 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 16:42:45.539886    4650 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/stopped-upgrade-596000/id_rsa Username:docker}
	I0805 16:42:45.543786    4650 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:42:45.546779    4650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:42:45.550796    4650 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:42:45.550802    4650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 16:42:45.550808    4650 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/stopped-upgrade-596000/id_rsa Username:docker}
	I0805 16:42:45.641943    4650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:42:45.647287    4650 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:42:45.647330    4650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:42:45.651126    4650 api_server.go:72] duration metric: took 119.437417ms to wait for apiserver process to appear ...
	I0805 16:42:45.651135    4650 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:42:45.651143    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:45.707452    4650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 16:42:45.720450    4650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:42:50.653207    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:50.653264    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:42:55.653587    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:42:55.653617    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:43:00.653896    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:43:00.653939    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:43:05.654410    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:43:05.654462    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:43:10.655039    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:43:10.655080    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:43:15.655845    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:43:15.655904    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0805 16:43:16.029602    4650 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0805 16:43:16.034895    4650 out.go:177] * Enabled addons: storage-provisioner
	I0805 16:43:16.042772    4650 addons.go:510] duration metric: took 30.511670625s for enable addons: enabled=[storage-provisioner]
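The default-storageclass addon fails because its callback must list StorageClasses through the same unreachable apiserver (`dial tcp 10.0.2.15:8443: i/o timeout`), while storage-provisioner only needs a `kubectl apply` executed over SSH inside the guest; that asymmetry is why one addon is reported enabled and the other errors. The failing call corresponds to something like the following client-go sketch (kubeconfig path from the log; a hypothetical reproduction, not minikube's code):

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig minikube updated, then attempt the same
	// StorageClass list the addon callback performs.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19373-1054/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	scs, err := clientset.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
	if err != nil {
		// With the apiserver down this surfaces the i/o timeout seen above.
		fmt.Println("Error listing StorageClasses:", err)
		return
	}
	fmt.Println(len(scs.Items), "storage classes")
}
```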
	I0805 16:43:20.656603    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:43:20.656642    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:43:25.657835    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:43:25.657864    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:43:30.659577    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:43:30.659599    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:43:35.661549    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:43:35.661593    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:43:40.663748    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:43:40.663791    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:43:45.665977    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:43:45.666130    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:43:45.676490    4650 logs.go:276] 1 containers: [ac7e11b648c2]
	I0805 16:43:45.676562    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:43:45.687093    4650 logs.go:276] 1 containers: [cc6e3b87bfb2]
	I0805 16:43:45.687163    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:43:45.697439    4650 logs.go:276] 2 containers: [c2937c496a6e f988cb366aa8]
	I0805 16:43:45.697503    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:43:45.711878    4650 logs.go:276] 1 containers: [639d9ba7ce0f]
	I0805 16:43:45.711949    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:43:45.722576    4650 logs.go:276] 1 containers: [0f914c4d93a3]
	I0805 16:43:45.722642    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:43:45.733495    4650 logs.go:276] 1 containers: [7aa2a6a609c4]
	I0805 16:43:45.733567    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:43:45.743410    4650 logs.go:276] 0 containers: []
	W0805 16:43:45.743422    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:43:45.743475    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:43:45.757888    4650 logs.go:276] 1 containers: [15de62697ec4]
	I0805 16:43:45.757903    4650 logs.go:123] Gathering logs for kube-apiserver [ac7e11b648c2] ...
	I0805 16:43:45.757910    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac7e11b648c2"
	I0805 16:43:45.773265    4650 logs.go:123] Gathering logs for kube-scheduler [639d9ba7ce0f] ...
	I0805 16:43:45.773278    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639d9ba7ce0f"
	I0805 16:43:45.789903    4650 logs.go:123] Gathering logs for kube-controller-manager [7aa2a6a609c4] ...
	I0805 16:43:45.789914    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa2a6a609c4"
	I0805 16:43:45.807720    4650 logs.go:123] Gathering logs for storage-provisioner [15de62697ec4] ...
	I0805 16:43:45.807731    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15de62697ec4"
	I0805 16:43:45.818964    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:43:45.818974    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:43:45.831826    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:43:45.831837    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:43:45.866040    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:43:45.866049    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:43:45.900662    4650 logs.go:123] Gathering logs for etcd [cc6e3b87bfb2] ...
	I0805 16:43:45.900677    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6e3b87bfb2"
	I0805 16:43:45.918385    4650 logs.go:123] Gathering logs for coredns [c2937c496a6e] ...
	I0805 16:43:45.918400    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2937c496a6e"
	I0805 16:43:45.930494    4650 logs.go:123] Gathering logs for coredns [f988cb366aa8] ...
	I0805 16:43:45.930506    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f988cb366aa8"
	I0805 16:43:45.942449    4650 logs.go:123] Gathering logs for kube-proxy [0f914c4d93a3] ...
	I0805 16:43:45.942460    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f914c4d93a3"
	I0805 16:43:45.956231    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:43:45.956242    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:43:45.980353    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:43:45.980364    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:43:48.487069    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:43:53.489378    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:43:53.489832    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:43:53.527304    4650 logs.go:276] 1 containers: [ac7e11b648c2]
	I0805 16:43:53.527434    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:43:53.556013    4650 logs.go:276] 1 containers: [cc6e3b87bfb2]
	I0805 16:43:53.556111    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:43:53.574753    4650 logs.go:276] 2 containers: [c2937c496a6e f988cb366aa8]
	I0805 16:43:53.574820    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:43:53.586104    4650 logs.go:276] 1 containers: [639d9ba7ce0f]
	I0805 16:43:53.586166    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:43:53.596856    4650 logs.go:276] 1 containers: [0f914c4d93a3]
	I0805 16:43:53.596922    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:43:53.607355    4650 logs.go:276] 1 containers: [7aa2a6a609c4]
	I0805 16:43:53.607428    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:43:53.618741    4650 logs.go:276] 0 containers: []
	W0805 16:43:53.618753    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:43:53.618811    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:43:53.628738    4650 logs.go:276] 1 containers: [15de62697ec4]
	I0805 16:43:53.628755    4650 logs.go:123] Gathering logs for kube-scheduler [639d9ba7ce0f] ...
	I0805 16:43:53.628760    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639d9ba7ce0f"
	I0805 16:43:53.644183    4650 logs.go:123] Gathering logs for kube-controller-manager [7aa2a6a609c4] ...
	I0805 16:43:53.644194    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa2a6a609c4"
	I0805 16:43:53.661280    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:43:53.661290    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:43:53.672686    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:43:53.672700    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:43:53.706912    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:43:53.706920    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:43:53.711170    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:43:53.711177    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:43:53.747597    4650 logs.go:123] Gathering logs for etcd [cc6e3b87bfb2] ...
	I0805 16:43:53.747609    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6e3b87bfb2"
	I0805 16:43:53.761307    4650 logs.go:123] Gathering logs for coredns [f988cb366aa8] ...
	I0805 16:43:53.761321    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f988cb366aa8"
	I0805 16:43:53.773347    4650 logs.go:123] Gathering logs for kube-apiserver [ac7e11b648c2] ...
	I0805 16:43:53.773360    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac7e11b648c2"
	I0805 16:43:53.787914    4650 logs.go:123] Gathering logs for coredns [c2937c496a6e] ...
	I0805 16:43:53.787927    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2937c496a6e"
	I0805 16:43:53.800720    4650 logs.go:123] Gathering logs for kube-proxy [0f914c4d93a3] ...
	I0805 16:43:53.800732    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f914c4d93a3"
	I0805 16:43:53.814157    4650 logs.go:123] Gathering logs for storage-provisioner [15de62697ec4] ...
	I0805 16:43:53.814172    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15de62697ec4"
	I0805 16:43:53.827389    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:43:53.827402    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:43:56.352741    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:44:01.355390    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:44:01.355749    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:44:01.389342    4650 logs.go:276] 1 containers: [ac7e11b648c2]
	I0805 16:44:01.389457    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:44:01.408001    4650 logs.go:276] 1 containers: [cc6e3b87bfb2]
	I0805 16:44:01.408082    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:44:01.423588    4650 logs.go:276] 2 containers: [c2937c496a6e f988cb366aa8]
	I0805 16:44:01.423652    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:44:01.438059    4650 logs.go:276] 1 containers: [639d9ba7ce0f]
	I0805 16:44:01.438133    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:44:01.448940    4650 logs.go:276] 1 containers: [0f914c4d93a3]
	I0805 16:44:01.449026    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:44:01.459916    4650 logs.go:276] 1 containers: [7aa2a6a609c4]
	I0805 16:44:01.459982    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:44:01.470356    4650 logs.go:276] 0 containers: []
	W0805 16:44:01.470368    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:44:01.470448    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:44:01.481034    4650 logs.go:276] 1 containers: [15de62697ec4]
	I0805 16:44:01.481048    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:44:01.481054    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:44:01.515720    4650 logs.go:123] Gathering logs for kube-apiserver [ac7e11b648c2] ...
	I0805 16:44:01.515727    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac7e11b648c2"
	I0805 16:44:01.532067    4650 logs.go:123] Gathering logs for etcd [cc6e3b87bfb2] ...
	I0805 16:44:01.532079    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6e3b87bfb2"
	I0805 16:44:01.550266    4650 logs.go:123] Gathering logs for coredns [c2937c496a6e] ...
	I0805 16:44:01.550280    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2937c496a6e"
	I0805 16:44:01.562715    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:44:01.562728    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:44:01.587281    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:44:01.587288    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:44:01.599361    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:44:01.599374    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:44:01.603537    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:44:01.603543    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:44:01.640443    4650 logs.go:123] Gathering logs for coredns [f988cb366aa8] ...
	I0805 16:44:01.640455    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f988cb366aa8"
	I0805 16:44:01.652324    4650 logs.go:123] Gathering logs for kube-scheduler [639d9ba7ce0f] ...
	I0805 16:44:01.652334    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639d9ba7ce0f"
	I0805 16:44:01.667312    4650 logs.go:123] Gathering logs for kube-proxy [0f914c4d93a3] ...
	I0805 16:44:01.667326    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f914c4d93a3"
	I0805 16:44:01.678587    4650 logs.go:123] Gathering logs for kube-controller-manager [7aa2a6a609c4] ...
	I0805 16:44:01.678601    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa2a6a609c4"
	I0805 16:44:01.696786    4650 logs.go:123] Gathering logs for storage-provisioner [15de62697ec4] ...
	I0805 16:44:01.696797    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15de62697ec4"
	I0805 16:44:04.210167    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:44:09.212498    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:44:09.212595    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:44:09.225627    4650 logs.go:276] 1 containers: [ac7e11b648c2]
	I0805 16:44:09.225702    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:44:09.236569    4650 logs.go:276] 1 containers: [cc6e3b87bfb2]
	I0805 16:44:09.236638    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:44:09.246799    4650 logs.go:276] 2 containers: [c2937c496a6e f988cb366aa8]
	I0805 16:44:09.246862    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:44:09.257280    4650 logs.go:276] 1 containers: [639d9ba7ce0f]
	I0805 16:44:09.257342    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:44:09.267938    4650 logs.go:276] 1 containers: [0f914c4d93a3]
	I0805 16:44:09.268012    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:44:09.278645    4650 logs.go:276] 1 containers: [7aa2a6a609c4]
	I0805 16:44:09.278700    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:44:09.289326    4650 logs.go:276] 0 containers: []
	W0805 16:44:09.289337    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:44:09.289383    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:44:09.299293    4650 logs.go:276] 1 containers: [15de62697ec4]
	I0805 16:44:09.299310    4650 logs.go:123] Gathering logs for kube-apiserver [ac7e11b648c2] ...
	I0805 16:44:09.299317    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac7e11b648c2"
	I0805 16:44:09.314052    4650 logs.go:123] Gathering logs for etcd [cc6e3b87bfb2] ...
	I0805 16:44:09.314063    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6e3b87bfb2"
	I0805 16:44:09.331492    4650 logs.go:123] Gathering logs for kube-proxy [0f914c4d93a3] ...
	I0805 16:44:09.331506    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f914c4d93a3"
	I0805 16:44:09.343449    4650 logs.go:123] Gathering logs for storage-provisioner [15de62697ec4] ...
	I0805 16:44:09.343461    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15de62697ec4"
	I0805 16:44:09.356717    4650 logs.go:123] Gathering logs for kube-scheduler [639d9ba7ce0f] ...
	I0805 16:44:09.356725    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639d9ba7ce0f"
	I0805 16:44:09.381844    4650 logs.go:123] Gathering logs for kube-controller-manager [7aa2a6a609c4] ...
	I0805 16:44:09.381855    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa2a6a609c4"
	I0805 16:44:09.399973    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:44:09.399983    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:44:09.423630    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:44:09.423638    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:44:09.456283    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:44:09.456292    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:44:09.461510    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:44:09.461518    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:44:09.495720    4650 logs.go:123] Gathering logs for coredns [c2937c496a6e] ...
	I0805 16:44:09.495731    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2937c496a6e"
	I0805 16:44:09.507010    4650 logs.go:123] Gathering logs for coredns [f988cb366aa8] ...
	I0805 16:44:09.507020    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f988cb366aa8"
	I0805 16:44:09.518480    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:44:09.518489    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:44:12.033776    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:44:17.035674    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:44:17.036074    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:44:17.071638    4650 logs.go:276] 1 containers: [ac7e11b648c2]
	I0805 16:44:17.071781    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:44:17.091507    4650 logs.go:276] 1 containers: [cc6e3b87bfb2]
	I0805 16:44:17.091611    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:44:17.107013    4650 logs.go:276] 2 containers: [c2937c496a6e f988cb366aa8]
	I0805 16:44:17.107088    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:44:17.123649    4650 logs.go:276] 1 containers: [639d9ba7ce0f]
	I0805 16:44:17.123721    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:44:17.134284    4650 logs.go:276] 1 containers: [0f914c4d93a3]
	I0805 16:44:17.134353    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:44:17.145204    4650 logs.go:276] 1 containers: [7aa2a6a609c4]
	I0805 16:44:17.145267    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:44:17.155421    4650 logs.go:276] 0 containers: []
	W0805 16:44:17.155431    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:44:17.155490    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:44:17.165844    4650 logs.go:276] 1 containers: [15de62697ec4]
	I0805 16:44:17.165859    4650 logs.go:123] Gathering logs for coredns [c2937c496a6e] ...
	I0805 16:44:17.165864    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2937c496a6e"
	I0805 16:44:17.177846    4650 logs.go:123] Gathering logs for coredns [f988cb366aa8] ...
	I0805 16:44:17.177854    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f988cb366aa8"
	I0805 16:44:17.189505    4650 logs.go:123] Gathering logs for kube-scheduler [639d9ba7ce0f] ...
	I0805 16:44:17.189516    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639d9ba7ce0f"
	I0805 16:44:17.204497    4650 logs.go:123] Gathering logs for kube-proxy [0f914c4d93a3] ...
	I0805 16:44:17.204509    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f914c4d93a3"
	I0805 16:44:17.216922    4650 logs.go:123] Gathering logs for storage-provisioner [15de62697ec4] ...
	I0805 16:44:17.216933    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15de62697ec4"
	I0805 16:44:17.228357    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:44:17.228371    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:44:17.232795    4650 logs.go:123] Gathering logs for kube-apiserver [ac7e11b648c2] ...
	I0805 16:44:17.232803    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac7e11b648c2"
	I0805 16:44:17.246961    4650 logs.go:123] Gathering logs for etcd [cc6e3b87bfb2] ...
	I0805 16:44:17.246973    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6e3b87bfb2"
	I0805 16:44:17.261109    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:44:17.261122    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:44:17.285904    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:44:17.285911    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:44:17.296926    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:44:17.296940    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:44:17.330365    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:44:17.330382    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:44:17.368501    4650 logs.go:123] Gathering logs for kube-controller-manager [7aa2a6a609c4] ...
	I0805 16:44:17.368512    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa2a6a609c4"
	I0805 16:44:19.888426    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:44:24.890576    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:44:24.891002    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:44:24.936858    4650 logs.go:276] 1 containers: [ac7e11b648c2]
	I0805 16:44:24.936974    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:44:24.960464    4650 logs.go:276] 1 containers: [cc6e3b87bfb2]
	I0805 16:44:24.960532    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:44:24.974577    4650 logs.go:276] 2 containers: [c2937c496a6e f988cb366aa8]
	I0805 16:44:24.974654    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:44:24.987619    4650 logs.go:276] 1 containers: [639d9ba7ce0f]
	I0805 16:44:24.987685    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:44:24.998748    4650 logs.go:276] 1 containers: [0f914c4d93a3]
	I0805 16:44:24.998810    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:44:25.009764    4650 logs.go:276] 1 containers: [7aa2a6a609c4]
	I0805 16:44:25.009826    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:44:25.020677    4650 logs.go:276] 0 containers: []
	W0805 16:44:25.020687    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:44:25.020737    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:44:25.031954    4650 logs.go:276] 1 containers: [15de62697ec4]
	I0805 16:44:25.031971    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:44:25.031976    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:44:25.067367    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:44:25.067377    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:44:25.103411    4650 logs.go:123] Gathering logs for kube-apiserver [ac7e11b648c2] ...
	I0805 16:44:25.103423    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac7e11b648c2"
	I0805 16:44:25.118668    4650 logs.go:123] Gathering logs for etcd [cc6e3b87bfb2] ...
	I0805 16:44:25.118681    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6e3b87bfb2"
	I0805 16:44:25.133274    4650 logs.go:123] Gathering logs for kube-scheduler [639d9ba7ce0f] ...
	I0805 16:44:25.133287    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639d9ba7ce0f"
	I0805 16:44:25.149070    4650 logs.go:123] Gathering logs for kube-controller-manager [7aa2a6a609c4] ...
	I0805 16:44:25.149082    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa2a6a609c4"
	I0805 16:44:25.166823    4650 logs.go:123] Gathering logs for storage-provisioner [15de62697ec4] ...
	I0805 16:44:25.166833    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15de62697ec4"
	I0805 16:44:25.178992    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:44:25.179001    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:44:25.183435    4650 logs.go:123] Gathering logs for coredns [c2937c496a6e] ...
	I0805 16:44:25.183443    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2937c496a6e"
	I0805 16:44:25.196063    4650 logs.go:123] Gathering logs for coredns [f988cb366aa8] ...
	I0805 16:44:25.196073    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f988cb366aa8"
	I0805 16:44:25.208513    4650 logs.go:123] Gathering logs for kube-proxy [0f914c4d93a3] ...
	I0805 16:44:25.208525    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f914c4d93a3"
	I0805 16:44:25.220826    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:44:25.220838    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:44:25.245848    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:44:25.245855    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:44:27.759344    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:44:32.761747    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:44:32.762121    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:44:32.802706    4650 logs.go:276] 1 containers: [ac7e11b648c2]
	I0805 16:44:32.802844    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:44:32.824959    4650 logs.go:276] 1 containers: [cc6e3b87bfb2]
	I0805 16:44:32.825065    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:44:32.841139    4650 logs.go:276] 2 containers: [c2937c496a6e f988cb366aa8]
	I0805 16:44:32.841208    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:44:32.853952    4650 logs.go:276] 1 containers: [639d9ba7ce0f]
	I0805 16:44:32.854018    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:44:32.865214    4650 logs.go:276] 1 containers: [0f914c4d93a3]
	I0805 16:44:32.865269    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:44:32.877385    4650 logs.go:276] 1 containers: [7aa2a6a609c4]
	I0805 16:44:32.877450    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:44:32.887623    4650 logs.go:276] 0 containers: []
	W0805 16:44:32.887634    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:44:32.887683    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:44:32.898389    4650 logs.go:276] 1 containers: [15de62697ec4]
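
The enumeration pass above issues one docker ps per component, matching the k8s_ name prefix that dockershim/cri-dockerd gives pod containers; a compact shell equivalent, with the component list taken from the filters above:

    # One docker ps per component, mirroring the k8s_<name> filters above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
        ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
        echo "${c}: ${ids:-none}"   # kindnet yields none here, hence the warning above
    done
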
	I0805 16:44:32.898406    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:44:32.898411    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:44:32.930732    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:44:32.930742    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:44:32.968082    4650 logs.go:123] Gathering logs for kube-apiserver [ac7e11b648c2] ...
	I0805 16:44:32.968092    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac7e11b648c2"
	I0805 16:44:32.982616    4650 logs.go:123] Gathering logs for kube-controller-manager [7aa2a6a609c4] ...
	I0805 16:44:32.982625    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa2a6a609c4"
	I0805 16:44:33.001090    4650 logs.go:123] Gathering logs for storage-provisioner [15de62697ec4] ...
	I0805 16:44:33.001102    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15de62697ec4"
	I0805 16:44:33.013550    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:44:33.013560    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:44:33.038003    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:44:33.038017    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:44:33.042469    4650 logs.go:123] Gathering logs for etcd [cc6e3b87bfb2] ...
	I0805 16:44:33.042477    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6e3b87bfb2"
	I0805 16:44:33.075812    4650 logs.go:123] Gathering logs for coredns [c2937c496a6e] ...
	I0805 16:44:33.075822    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2937c496a6e"
	I0805 16:44:33.091370    4650 logs.go:123] Gathering logs for coredns [f988cb366aa8] ...
	I0805 16:44:33.091380    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f988cb366aa8"
	I0805 16:44:33.103306    4650 logs.go:123] Gathering logs for kube-scheduler [639d9ba7ce0f] ...
	I0805 16:44:33.103319    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639d9ba7ce0f"
	I0805 16:44:33.118731    4650 logs.go:123] Gathering logs for kube-proxy [0f914c4d93a3] ...
	I0805 16:44:33.118744    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f914c4d93a3"
	I0805 16:44:33.131159    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:44:33.131173    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:44:35.645869    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:44:40.648099    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:44:40.648654    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:44:40.676701    4650 logs.go:276] 1 containers: [ac7e11b648c2]
	I0805 16:44:40.676839    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:44:40.695711    4650 logs.go:276] 1 containers: [cc6e3b87bfb2]
	I0805 16:44:40.695787    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:44:40.711334    4650 logs.go:276] 2 containers: [c2937c496a6e f988cb366aa8]
	I0805 16:44:40.711408    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:44:40.723033    4650 logs.go:276] 1 containers: [639d9ba7ce0f]
	I0805 16:44:40.723098    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:44:40.733885    4650 logs.go:276] 1 containers: [0f914c4d93a3]
	I0805 16:44:40.733960    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:44:40.745015    4650 logs.go:276] 1 containers: [7aa2a6a609c4]
	I0805 16:44:40.745091    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:44:40.755871    4650 logs.go:276] 0 containers: []
	W0805 16:44:40.755880    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:44:40.755924    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:44:40.766953    4650 logs.go:276] 1 containers: [15de62697ec4]
	I0805 16:44:40.766971    4650 logs.go:123] Gathering logs for coredns [c2937c496a6e] ...
	I0805 16:44:40.766975    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2937c496a6e"
	I0805 16:44:40.779093    4650 logs.go:123] Gathering logs for coredns [f988cb366aa8] ...
	I0805 16:44:40.779104    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f988cb366aa8"
	I0805 16:44:40.791340    4650 logs.go:123] Gathering logs for kube-proxy [0f914c4d93a3] ...
	I0805 16:44:40.791350    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f914c4d93a3"
	I0805 16:44:40.806640    4650 logs.go:123] Gathering logs for kube-controller-manager [7aa2a6a609c4] ...
	I0805 16:44:40.806652    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa2a6a609c4"
	I0805 16:44:40.824747    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:44:40.824758    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:44:40.859635    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:44:40.859647    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:44:40.864439    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:44:40.864448    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:44:40.899382    4650 logs.go:123] Gathering logs for kube-apiserver [ac7e11b648c2] ...
	I0805 16:44:40.899392    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac7e11b648c2"
	I0805 16:44:40.914627    4650 logs.go:123] Gathering logs for storage-provisioner [15de62697ec4] ...
	I0805 16:44:40.914639    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15de62697ec4"
	I0805 16:44:40.926560    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:44:40.926570    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:44:40.951686    4650 logs.go:123] Gathering logs for etcd [cc6e3b87bfb2] ...
	I0805 16:44:40.951699    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6e3b87bfb2"
	I0805 16:44:40.967977    4650 logs.go:123] Gathering logs for kube-scheduler [639d9ba7ce0f] ...
	I0805 16:44:40.967990    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639d9ba7ce0f"
	I0805 16:44:40.983835    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:44:40.983849    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:44:43.498245    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:44:48.500973    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:44:48.501388    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:44:48.544896    4650 logs.go:276] 1 containers: [ac7e11b648c2]
	I0805 16:44:48.545033    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:44:48.564575    4650 logs.go:276] 1 containers: [cc6e3b87bfb2]
	I0805 16:44:48.564664    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:44:48.579179    4650 logs.go:276] 2 containers: [c2937c496a6e f988cb366aa8]
	I0805 16:44:48.579239    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:44:48.596409    4650 logs.go:276] 1 containers: [639d9ba7ce0f]
	I0805 16:44:48.596478    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:44:48.607561    4650 logs.go:276] 1 containers: [0f914c4d93a3]
	I0805 16:44:48.607629    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:44:48.618448    4650 logs.go:276] 1 containers: [7aa2a6a609c4]
	I0805 16:44:48.618516    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:44:48.628886    4650 logs.go:276] 0 containers: []
	W0805 16:44:48.628903    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:44:48.628951    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:44:48.639167    4650 logs.go:276] 1 containers: [15de62697ec4]
	I0805 16:44:48.639185    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:44:48.639190    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:44:48.643747    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:44:48.643753    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:44:48.680654    4650 logs.go:123] Gathering logs for etcd [cc6e3b87bfb2] ...
	I0805 16:44:48.680664    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6e3b87bfb2"
	I0805 16:44:48.694289    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:44:48.694300    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:44:48.717409    4650 logs.go:123] Gathering logs for kube-controller-manager [7aa2a6a609c4] ...
	I0805 16:44:48.717416    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa2a6a609c4"
	I0805 16:44:48.735162    4650 logs.go:123] Gathering logs for storage-provisioner [15de62697ec4] ...
	I0805 16:44:48.735174    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15de62697ec4"
	I0805 16:44:48.746162    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:44:48.746175    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:44:48.779039    4650 logs.go:123] Gathering logs for kube-apiserver [ac7e11b648c2] ...
	I0805 16:44:48.779047    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac7e11b648c2"
	I0805 16:44:48.793717    4650 logs.go:123] Gathering logs for coredns [c2937c496a6e] ...
	I0805 16:44:48.793727    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2937c496a6e"
	I0805 16:44:48.805425    4650 logs.go:123] Gathering logs for coredns [f988cb366aa8] ...
	I0805 16:44:48.805435    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f988cb366aa8"
	I0805 16:44:48.816823    4650 logs.go:123] Gathering logs for kube-scheduler [639d9ba7ce0f] ...
	I0805 16:44:48.816837    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639d9ba7ce0f"
	I0805 16:44:48.832662    4650 logs.go:123] Gathering logs for kube-proxy [0f914c4d93a3] ...
	I0805 16:44:48.832675    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f914c4d93a3"
	I0805 16:44:48.844722    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:44:48.844732    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:44:51.358384    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:44:56.360939    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:44:56.361381    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:44:56.401175    4650 logs.go:276] 1 containers: [ac7e11b648c2]
	I0805 16:44:56.401302    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:44:56.423681    4650 logs.go:276] 1 containers: [cc6e3b87bfb2]
	I0805 16:44:56.423789    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:44:56.446389    4650 logs.go:276] 2 containers: [c2937c496a6e f988cb366aa8]
	I0805 16:44:56.446463    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:44:56.458115    4650 logs.go:276] 1 containers: [639d9ba7ce0f]
	I0805 16:44:56.458192    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:44:56.468628    4650 logs.go:276] 1 containers: [0f914c4d93a3]
	I0805 16:44:56.468696    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:44:56.478828    4650 logs.go:276] 1 containers: [7aa2a6a609c4]
	I0805 16:44:56.478897    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:44:56.488586    4650 logs.go:276] 0 containers: []
	W0805 16:44:56.488600    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:44:56.488656    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:44:56.499333    4650 logs.go:276] 1 containers: [15de62697ec4]
	I0805 16:44:56.499350    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:44:56.499357    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:44:56.512324    4650 logs.go:123] Gathering logs for kube-apiserver [ac7e11b648c2] ...
	I0805 16:44:56.512334    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac7e11b648c2"
	I0805 16:44:56.533503    4650 logs.go:123] Gathering logs for etcd [cc6e3b87bfb2] ...
	I0805 16:44:56.533517    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6e3b87bfb2"
	I0805 16:44:56.576889    4650 logs.go:123] Gathering logs for coredns [f988cb366aa8] ...
	I0805 16:44:56.576908    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f988cb366aa8"
	I0805 16:44:56.601980    4650 logs.go:123] Gathering logs for kube-scheduler [639d9ba7ce0f] ...
	I0805 16:44:56.601995    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639d9ba7ce0f"
	I0805 16:44:56.633119    4650 logs.go:123] Gathering logs for kube-proxy [0f914c4d93a3] ...
	I0805 16:44:56.633136    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f914c4d93a3"
	I0805 16:44:56.646580    4650 logs.go:123] Gathering logs for kube-controller-manager [7aa2a6a609c4] ...
	I0805 16:44:56.646592    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa2a6a609c4"
	I0805 16:44:56.670643    4650 logs.go:123] Gathering logs for storage-provisioner [15de62697ec4] ...
	I0805 16:44:56.670659    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15de62697ec4"
	I0805 16:44:56.694206    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:44:56.694218    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:44:56.732604    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:44:56.732617    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:44:56.736910    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:44:56.736920    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:44:56.771014    4650 logs.go:123] Gathering logs for coredns [c2937c496a6e] ...
	I0805 16:44:56.771024    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2937c496a6e"
	I0805 16:44:56.785541    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:44:56.785555    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:44:59.312487    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:45:04.315170    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:45:04.315578    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:45:04.355783    4650 logs.go:276] 1 containers: [ac7e11b648c2]
	I0805 16:45:04.355918    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:45:04.377952    4650 logs.go:276] 1 containers: [cc6e3b87bfb2]
	I0805 16:45:04.378050    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:45:04.392919    4650 logs.go:276] 4 containers: [a99605df4ade 5f76653f2c45 c2937c496a6e f988cb366aa8]
	I0805 16:45:04.392992    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:45:04.405971    4650 logs.go:276] 1 containers: [639d9ba7ce0f]
	I0805 16:45:04.406040    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:45:04.417298    4650 logs.go:276] 1 containers: [0f914c4d93a3]
	I0805 16:45:04.417363    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:45:04.427970    4650 logs.go:276] 1 containers: [7aa2a6a609c4]
	I0805 16:45:04.428028    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:45:04.438705    4650 logs.go:276] 0 containers: []
	W0805 16:45:04.438716    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:45:04.438766    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:45:04.449568    4650 logs.go:276] 1 containers: [15de62697ec4]
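
The coredns filter now returns four IDs where the earlier passes (e.g. at 16:44:32 above) returned two; since docker ps -a also lists exited containers, this is consistent with the CoreDNS containers having been recreated while the old ones remain visible. A quick way to separate current from exited instances ({{.Status}} is a standard docker ps format field, though this command is a hypothetical follow-up, not one minikube runs here):

    # Show status next to each CoreDNS container ID.
    docker ps -a --filter name=k8s_coredns --format '{{.ID}} {{.Status}}'
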
	I0805 16:45:04.449588    4650 logs.go:123] Gathering logs for etcd [cc6e3b87bfb2] ...
	I0805 16:45:04.449593    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6e3b87bfb2"
	I0805 16:45:04.464265    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:45:04.464278    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:45:04.468819    4650 logs.go:123] Gathering logs for kube-apiserver [ac7e11b648c2] ...
	I0805 16:45:04.468828    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac7e11b648c2"
	I0805 16:45:04.484097    4650 logs.go:123] Gathering logs for kube-scheduler [639d9ba7ce0f] ...
	I0805 16:45:04.484108    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639d9ba7ce0f"
	I0805 16:45:04.501394    4650 logs.go:123] Gathering logs for storage-provisioner [15de62697ec4] ...
	I0805 16:45:04.501404    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15de62697ec4"
	I0805 16:45:04.513146    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:45:04.513160    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:45:04.546932    4650 logs.go:123] Gathering logs for coredns [5f76653f2c45] ...
	I0805 16:45:04.546944    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f76653f2c45"
	I0805 16:45:04.558569    4650 logs.go:123] Gathering logs for coredns [f988cb366aa8] ...
	I0805 16:45:04.558580    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f988cb366aa8"
	I0805 16:45:04.570753    4650 logs.go:123] Gathering logs for kube-controller-manager [7aa2a6a609c4] ...
	I0805 16:45:04.570763    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa2a6a609c4"
	I0805 16:45:04.589011    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:45:04.589024    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:45:04.600625    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:45:04.600638    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:45:04.636273    4650 logs.go:123] Gathering logs for coredns [a99605df4ade] ...
	I0805 16:45:04.636286    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99605df4ade"
	I0805 16:45:04.647250    4650 logs.go:123] Gathering logs for coredns [c2937c496a6e] ...
	I0805 16:45:04.647263    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2937c496a6e"
	I0805 16:45:04.659322    4650 logs.go:123] Gathering logs for kube-proxy [0f914c4d93a3] ...
	I0805 16:45:04.659333    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f914c4d93a3"
	I0805 16:45:04.674975    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:45:04.674989    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:45:07.202414    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:45:12.204894    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:45:12.204963    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:45:12.215868    4650 logs.go:276] 1 containers: [ac7e11b648c2]
	I0805 16:45:12.215944    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:45:12.226531    4650 logs.go:276] 1 containers: [cc6e3b87bfb2]
	I0805 16:45:12.226595    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:45:12.245685    4650 logs.go:276] 4 containers: [a99605df4ade 5f76653f2c45 c2937c496a6e f988cb366aa8]
	I0805 16:45:12.245760    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:45:12.256723    4650 logs.go:276] 1 containers: [639d9ba7ce0f]
	I0805 16:45:12.256803    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:45:12.271222    4650 logs.go:276] 1 containers: [0f914c4d93a3]
	I0805 16:45:12.271310    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:45:12.282065    4650 logs.go:276] 1 containers: [7aa2a6a609c4]
	I0805 16:45:12.282129    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:45:12.291527    4650 logs.go:276] 0 containers: []
	W0805 16:45:12.291544    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:45:12.291596    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:45:12.302308    4650 logs.go:276] 1 containers: [15de62697ec4]
	I0805 16:45:12.302325    4650 logs.go:123] Gathering logs for kube-proxy [0f914c4d93a3] ...
	I0805 16:45:12.302330    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f914c4d93a3"
	I0805 16:45:12.313950    4650 logs.go:123] Gathering logs for storage-provisioner [15de62697ec4] ...
	I0805 16:45:12.313964    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15de62697ec4"
	I0805 16:45:12.325458    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:45:12.325472    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:45:12.349072    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:45:12.349082    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:45:12.383444    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:45:12.383456    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:45:12.387989    4650 logs.go:123] Gathering logs for coredns [a99605df4ade] ...
	I0805 16:45:12.387998    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99605df4ade"
	I0805 16:45:12.399650    4650 logs.go:123] Gathering logs for coredns [f988cb366aa8] ...
	I0805 16:45:12.399664    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f988cb366aa8"
	I0805 16:45:12.411190    4650 logs.go:123] Gathering logs for kube-controller-manager [7aa2a6a609c4] ...
	I0805 16:45:12.411200    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa2a6a609c4"
	I0805 16:45:12.428973    4650 logs.go:123] Gathering logs for etcd [cc6e3b87bfb2] ...
	I0805 16:45:12.428986    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6e3b87bfb2"
	I0805 16:45:12.445128    4650 logs.go:123] Gathering logs for coredns [c2937c496a6e] ...
	I0805 16:45:12.445139    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2937c496a6e"
	I0805 16:45:12.456729    4650 logs.go:123] Gathering logs for kube-scheduler [639d9ba7ce0f] ...
	I0805 16:45:12.456740    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639d9ba7ce0f"
	I0805 16:45:12.477497    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:45:12.477509    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:45:12.489734    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:45:12.489747    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:45:12.525285    4650 logs.go:123] Gathering logs for kube-apiserver [ac7e11b648c2] ...
	I0805 16:45:12.525302    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac7e11b648c2"
	I0805 16:45:12.541151    4650 logs.go:123] Gathering logs for coredns [5f76653f2c45] ...
	I0805 16:45:12.541163    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f76653f2c45"
	I0805 16:45:15.062771    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:45:20.065592    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:45:20.065999    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:45:20.103868    4650 logs.go:276] 1 containers: [ac7e11b648c2]
	I0805 16:45:20.104001    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:45:20.126203    4650 logs.go:276] 1 containers: [cc6e3b87bfb2]
	I0805 16:45:20.126315    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:45:20.142082    4650 logs.go:276] 4 containers: [a99605df4ade 5f76653f2c45 c2937c496a6e f988cb366aa8]
	I0805 16:45:20.142162    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:45:20.154915    4650 logs.go:276] 1 containers: [639d9ba7ce0f]
	I0805 16:45:20.154984    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:45:20.165928    4650 logs.go:276] 1 containers: [0f914c4d93a3]
	I0805 16:45:20.165993    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:45:20.176604    4650 logs.go:276] 1 containers: [7aa2a6a609c4]
	I0805 16:45:20.176677    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:45:20.187428    4650 logs.go:276] 0 containers: []
	W0805 16:45:20.187440    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:45:20.187500    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:45:20.197952    4650 logs.go:276] 1 containers: [15de62697ec4]
	I0805 16:45:20.197969    4650 logs.go:123] Gathering logs for etcd [cc6e3b87bfb2] ...
	I0805 16:45:20.197974    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6e3b87bfb2"
	I0805 16:45:20.212152    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:45:20.212162    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:45:20.223852    4650 logs.go:123] Gathering logs for storage-provisioner [15de62697ec4] ...
	I0805 16:45:20.223863    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15de62697ec4"
	I0805 16:45:20.236081    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:45:20.236092    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:45:20.270659    4650 logs.go:123] Gathering logs for kube-apiserver [ac7e11b648c2] ...
	I0805 16:45:20.270672    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac7e11b648c2"
	I0805 16:45:20.292040    4650 logs.go:123] Gathering logs for kube-scheduler [639d9ba7ce0f] ...
	I0805 16:45:20.292051    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639d9ba7ce0f"
	I0805 16:45:20.307216    4650 logs.go:123] Gathering logs for kube-controller-manager [7aa2a6a609c4] ...
	I0805 16:45:20.307226    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa2a6a609c4"
	I0805 16:45:20.325830    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:45:20.325842    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:45:20.359778    4650 logs.go:123] Gathering logs for coredns [a99605df4ade] ...
	I0805 16:45:20.359789    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99605df4ade"
	I0805 16:45:20.371770    4650 logs.go:123] Gathering logs for coredns [c2937c496a6e] ...
	I0805 16:45:20.371783    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2937c496a6e"
	I0805 16:45:20.383602    4650 logs.go:123] Gathering logs for kube-proxy [0f914c4d93a3] ...
	I0805 16:45:20.383612    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f914c4d93a3"
	I0805 16:45:20.395868    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:45:20.395877    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:45:20.419652    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:45:20.419661    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:45:20.424189    4650 logs.go:123] Gathering logs for coredns [5f76653f2c45] ...
	I0805 16:45:20.424198    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f76653f2c45"
	I0805 16:45:20.436340    4650 logs.go:123] Gathering logs for coredns [f988cb366aa8] ...
	I0805 16:45:20.436351    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f988cb366aa8"
	I0805 16:45:22.950521    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:45:27.952665    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:45:27.953054    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:45:27.991483    4650 logs.go:276] 1 containers: [ac7e11b648c2]
	I0805 16:45:27.991608    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:45:28.010079    4650 logs.go:276] 1 containers: [cc6e3b87bfb2]
	I0805 16:45:28.010162    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:45:28.024080    4650 logs.go:276] 4 containers: [a99605df4ade 5f76653f2c45 c2937c496a6e f988cb366aa8]
	I0805 16:45:28.024151    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:45:28.036232    4650 logs.go:276] 1 containers: [639d9ba7ce0f]
	I0805 16:45:28.036294    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:45:28.046961    4650 logs.go:276] 1 containers: [0f914c4d93a3]
	I0805 16:45:28.047017    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:45:28.058331    4650 logs.go:276] 1 containers: [7aa2a6a609c4]
	I0805 16:45:28.058398    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:45:28.068242    4650 logs.go:276] 0 containers: []
	W0805 16:45:28.068257    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:45:28.068317    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:45:28.083959    4650 logs.go:276] 1 containers: [15de62697ec4]
	I0805 16:45:28.083978    4650 logs.go:123] Gathering logs for kube-proxy [0f914c4d93a3] ...
	I0805 16:45:28.083984    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f914c4d93a3"
	I0805 16:45:28.095579    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:45:28.095590    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:45:28.111182    4650 logs.go:123] Gathering logs for coredns [5f76653f2c45] ...
	I0805 16:45:28.111193    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f76653f2c45"
	I0805 16:45:28.128413    4650 logs.go:123] Gathering logs for etcd [cc6e3b87bfb2] ...
	I0805 16:45:28.128424    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6e3b87bfb2"
	I0805 16:45:28.142686    4650 logs.go:123] Gathering logs for coredns [a99605df4ade] ...
	I0805 16:45:28.142695    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99605df4ade"
	I0805 16:45:28.159178    4650 logs.go:123] Gathering logs for coredns [c2937c496a6e] ...
	I0805 16:45:28.159189    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2937c496a6e"
	I0805 16:45:28.171574    4650 logs.go:123] Gathering logs for kube-scheduler [639d9ba7ce0f] ...
	I0805 16:45:28.171585    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639d9ba7ce0f"
	I0805 16:45:28.186742    4650 logs.go:123] Gathering logs for storage-provisioner [15de62697ec4] ...
	I0805 16:45:28.186756    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15de62697ec4"
	I0805 16:45:28.198463    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:45:28.198475    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:45:28.234046    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:45:28.234057    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:45:28.238728    4650 logs.go:123] Gathering logs for kube-controller-manager [7aa2a6a609c4] ...
	I0805 16:45:28.238735    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa2a6a609c4"
	I0805 16:45:28.260453    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:45:28.260465    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:45:28.285618    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:45:28.285624    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:45:28.319331    4650 logs.go:123] Gathering logs for coredns [f988cb366aa8] ...
	I0805 16:45:28.319336    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f988cb366aa8"
	I0805 16:45:28.330924    4650 logs.go:123] Gathering logs for kube-apiserver [ac7e11b648c2] ...
	I0805 16:45:28.330935    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac7e11b648c2"
	I0805 16:45:30.847101    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:45:35.849373    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:45:35.849457    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:45:35.861258    4650 logs.go:276] 1 containers: [ac7e11b648c2]
	I0805 16:45:35.861314    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:45:35.872088    4650 logs.go:276] 1 containers: [cc6e3b87bfb2]
	I0805 16:45:35.872144    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:45:35.884505    4650 logs.go:276] 4 containers: [a99605df4ade 5f76653f2c45 c2937c496a6e f988cb366aa8]
	I0805 16:45:35.884573    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:45:35.896929    4650 logs.go:276] 1 containers: [639d9ba7ce0f]
	I0805 16:45:35.896988    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:45:35.908395    4650 logs.go:276] 1 containers: [0f914c4d93a3]
	I0805 16:45:35.908454    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:45:35.919253    4650 logs.go:276] 1 containers: [7aa2a6a609c4]
	I0805 16:45:35.919312    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:45:35.930354    4650 logs.go:276] 0 containers: []
	W0805 16:45:35.930366    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:45:35.930420    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:45:35.942703    4650 logs.go:276] 1 containers: [15de62697ec4]
	I0805 16:45:35.942718    4650 logs.go:123] Gathering logs for kube-proxy [0f914c4d93a3] ...
	I0805 16:45:35.942723    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f914c4d93a3"
	I0805 16:45:35.956678    4650 logs.go:123] Gathering logs for kube-apiserver [ac7e11b648c2] ...
	I0805 16:45:35.956691    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac7e11b648c2"
	I0805 16:45:35.974182    4650 logs.go:123] Gathering logs for etcd [cc6e3b87bfb2] ...
	I0805 16:45:35.974197    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6e3b87bfb2"
	I0805 16:45:35.989763    4650 logs.go:123] Gathering logs for kube-controller-manager [7aa2a6a609c4] ...
	I0805 16:45:35.989771    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa2a6a609c4"
	I0805 16:45:36.006835    4650 logs.go:123] Gathering logs for coredns [5f76653f2c45] ...
	I0805 16:45:36.006849    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f76653f2c45"
	I0805 16:45:36.020122    4650 logs.go:123] Gathering logs for coredns [c2937c496a6e] ...
	I0805 16:45:36.020135    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2937c496a6e"
	I0805 16:45:36.033293    4650 logs.go:123] Gathering logs for coredns [f988cb366aa8] ...
	I0805 16:45:36.033303    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f988cb366aa8"
	I0805 16:45:36.045115    4650 logs.go:123] Gathering logs for kube-scheduler [639d9ba7ce0f] ...
	I0805 16:45:36.045128    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639d9ba7ce0f"
	I0805 16:45:36.064819    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:45:36.064839    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:45:36.091403    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:45:36.091420    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:45:36.104650    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:45:36.104660    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:45:36.142006    4650 logs.go:123] Gathering logs for coredns [a99605df4ade] ...
	I0805 16:45:36.142026    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99605df4ade"
	I0805 16:45:36.156314    4650 logs.go:123] Gathering logs for storage-provisioner [15de62697ec4] ...
	I0805 16:45:36.156328    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15de62697ec4"
	I0805 16:45:36.172580    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:45:36.172589    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:45:36.178258    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:45:36.178269    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:45:38.717558    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:45:43.717838    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:45:43.717929    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:45:43.730964    4650 logs.go:276] 1 containers: [ac7e11b648c2]
	I0805 16:45:43.731046    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:45:43.745123    4650 logs.go:276] 1 containers: [cc6e3b87bfb2]
	I0805 16:45:43.745227    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:45:43.757508    4650 logs.go:276] 4 containers: [a99605df4ade 5f76653f2c45 c2937c496a6e f988cb366aa8]
	I0805 16:45:43.757582    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:45:43.769903    4650 logs.go:276] 1 containers: [639d9ba7ce0f]
	I0805 16:45:43.769971    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:45:43.782653    4650 logs.go:276] 1 containers: [0f914c4d93a3]
	I0805 16:45:43.782722    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:45:43.794946    4650 logs.go:276] 1 containers: [7aa2a6a609c4]
	I0805 16:45:43.795025    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:45:43.806909    4650 logs.go:276] 0 containers: []
	W0805 16:45:43.806921    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:45:43.806982    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:45:43.823436    4650 logs.go:276] 1 containers: [15de62697ec4]
	I0805 16:45:43.823457    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:45:43.823463    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:45:43.868875    4650 logs.go:123] Gathering logs for kube-proxy [0f914c4d93a3] ...
	I0805 16:45:43.868896    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f914c4d93a3"
	I0805 16:45:43.881713    4650 logs.go:123] Gathering logs for storage-provisioner [15de62697ec4] ...
	I0805 16:45:43.881722    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15de62697ec4"
	I0805 16:45:43.893422    4650 logs.go:123] Gathering logs for etcd [cc6e3b87bfb2] ...
	I0805 16:45:43.893434    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6e3b87bfb2"
	I0805 16:45:43.908038    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:45:43.908053    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:45:43.932610    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:45:43.932617    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:45:43.943980    4650 logs.go:123] Gathering logs for kube-controller-manager [7aa2a6a609c4] ...
	I0805 16:45:43.943990    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa2a6a609c4"
	I0805 16:45:43.964577    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:45:43.964587    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:45:43.969428    4650 logs.go:123] Gathering logs for coredns [5f76653f2c45] ...
	I0805 16:45:43.969436    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f76653f2c45"
	I0805 16:45:43.982570    4650 logs.go:123] Gathering logs for coredns [c2937c496a6e] ...
	I0805 16:45:43.982579    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2937c496a6e"
	I0805 16:45:43.997995    4650 logs.go:123] Gathering logs for coredns [f988cb366aa8] ...
	I0805 16:45:43.998004    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f988cb366aa8"
	I0805 16:45:44.010035    4650 logs.go:123] Gathering logs for kube-scheduler [639d9ba7ce0f] ...
	I0805 16:45:44.010048    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639d9ba7ce0f"
	I0805 16:45:44.033596    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:45:44.033607    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:45:44.067902    4650 logs.go:123] Gathering logs for kube-apiserver [ac7e11b648c2] ...
	I0805 16:45:44.067910    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac7e11b648c2"
	I0805 16:45:44.081661    4650 logs.go:123] Gathering logs for coredns [a99605df4ade] ...
	I0805 16:45:44.081674    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99605df4ade"
	I0805 16:45:46.593630    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:45:51.595760    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:45:51.595983    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:45:51.619482    4650 logs.go:276] 1 containers: [ac7e11b648c2]
	I0805 16:45:51.619598    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:45:51.638084    4650 logs.go:276] 1 containers: [cc6e3b87bfb2]
	I0805 16:45:51.638158    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:45:51.650363    4650 logs.go:276] 4 containers: [a99605df4ade 5f76653f2c45 c2937c496a6e f988cb366aa8]
	I0805 16:45:51.650426    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:45:51.660937    4650 logs.go:276] 1 containers: [639d9ba7ce0f]
	I0805 16:45:51.661002    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:45:51.671342    4650 logs.go:276] 1 containers: [0f914c4d93a3]
	I0805 16:45:51.671408    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:45:51.681510    4650 logs.go:276] 1 containers: [7aa2a6a609c4]
	I0805 16:45:51.681576    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:45:51.697385    4650 logs.go:276] 0 containers: []
	W0805 16:45:51.697396    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:45:51.697451    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:45:51.707843    4650 logs.go:276] 1 containers: [15de62697ec4]
	I0805 16:45:51.707865    4650 logs.go:123] Gathering logs for coredns [f988cb366aa8] ...
	I0805 16:45:51.707870    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f988cb366aa8"
	I0805 16:45:51.720206    4650 logs.go:123] Gathering logs for kube-proxy [0f914c4d93a3] ...
	I0805 16:45:51.720219    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f914c4d93a3"
	I0805 16:45:51.732121    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:45:51.732134    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:45:51.766960    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:45:51.766969    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:45:51.771463    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:45:51.771472    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:45:51.805518    4650 logs.go:123] Gathering logs for coredns [5f76653f2c45] ...
	I0805 16:45:51.805531    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f76653f2c45"
	I0805 16:45:51.817945    4650 logs.go:123] Gathering logs for kube-apiserver [ac7e11b648c2] ...
	I0805 16:45:51.817958    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac7e11b648c2"
	I0805 16:45:51.831938    4650 logs.go:123] Gathering logs for coredns [a99605df4ade] ...
	I0805 16:45:51.831948    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99605df4ade"
	I0805 16:45:51.843242    4650 logs.go:123] Gathering logs for storage-provisioner [15de62697ec4] ...
	I0805 16:45:51.843255    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15de62697ec4"
	I0805 16:45:51.854423    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:45:51.854434    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:45:51.867730    4650 logs.go:123] Gathering logs for etcd [cc6e3b87bfb2] ...
	I0805 16:45:51.867743    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6e3b87bfb2"
	I0805 16:45:51.881538    4650 logs.go:123] Gathering logs for kube-scheduler [639d9ba7ce0f] ...
	I0805 16:45:51.881551    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639d9ba7ce0f"
	I0805 16:45:51.897117    4650 logs.go:123] Gathering logs for kube-controller-manager [7aa2a6a609c4] ...
	I0805 16:45:51.897129    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa2a6a609c4"
	I0805 16:45:51.914415    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:45:51.914426    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:45:51.938577    4650 logs.go:123] Gathering logs for coredns [c2937c496a6e] ...
	I0805 16:45:51.938589    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2937c496a6e"
	I0805 16:45:54.453959    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:45:59.456597    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:45:59.456687    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:45:59.467723    4650 logs.go:276] 1 containers: [ac7e11b648c2]
	I0805 16:45:59.467774    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:45:59.478618    4650 logs.go:276] 1 containers: [cc6e3b87bfb2]
	I0805 16:45:59.478677    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:45:59.490170    4650 logs.go:276] 4 containers: [a99605df4ade 5f76653f2c45 c2937c496a6e f988cb366aa8]
	I0805 16:45:59.490249    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:45:59.507898    4650 logs.go:276] 1 containers: [639d9ba7ce0f]
	I0805 16:45:59.507948    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:45:59.519527    4650 logs.go:276] 1 containers: [0f914c4d93a3]
	I0805 16:45:59.519609    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:45:59.535070    4650 logs.go:276] 1 containers: [7aa2a6a609c4]
	I0805 16:45:59.535142    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:45:59.549354    4650 logs.go:276] 0 containers: []
	W0805 16:45:59.549365    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:45:59.549416    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:45:59.561931    4650 logs.go:276] 1 containers: [15de62697ec4]
	I0805 16:45:59.561949    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:45:59.561954    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:45:59.602931    4650 logs.go:123] Gathering logs for kube-scheduler [639d9ba7ce0f] ...
	I0805 16:45:59.602943    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639d9ba7ce0f"
	I0805 16:45:59.619389    4650 logs.go:123] Gathering logs for kube-controller-manager [7aa2a6a609c4] ...
	I0805 16:45:59.619397    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa2a6a609c4"
	I0805 16:45:59.637513    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:45:59.637524    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:45:59.663843    4650 logs.go:123] Gathering logs for coredns [f988cb366aa8] ...
	I0805 16:45:59.663853    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f988cb366aa8"
	I0805 16:45:59.676228    4650 logs.go:123] Gathering logs for etcd [cc6e3b87bfb2] ...
	I0805 16:45:59.676240    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6e3b87bfb2"
	I0805 16:45:59.695180    4650 logs.go:123] Gathering logs for coredns [c2937c496a6e] ...
	I0805 16:45:59.695190    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2937c496a6e"
	I0805 16:45:59.708244    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:45:59.708254    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:45:59.725291    4650 logs.go:123] Gathering logs for kube-apiserver [ac7e11b648c2] ...
	I0805 16:45:59.725302    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac7e11b648c2"
	I0805 16:45:59.741316    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:45:59.741327    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:45:59.745994    4650 logs.go:123] Gathering logs for coredns [a99605df4ade] ...
	I0805 16:45:59.746002    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99605df4ade"
	I0805 16:45:59.758083    4650 logs.go:123] Gathering logs for coredns [5f76653f2c45] ...
	I0805 16:45:59.758096    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f76653f2c45"
	I0805 16:45:59.770441    4650 logs.go:123] Gathering logs for kube-proxy [0f914c4d93a3] ...
	I0805 16:45:59.770452    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f914c4d93a3"
	I0805 16:45:59.788853    4650 logs.go:123] Gathering logs for storage-provisioner [15de62697ec4] ...
	I0805 16:45:59.788861    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15de62697ec4"
	I0805 16:45:59.800938    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:45:59.800951    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:46:02.340137    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:46:07.342865    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:46:07.343260    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:46:07.382520    4650 logs.go:276] 1 containers: [ac7e11b648c2]
	I0805 16:46:07.382645    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:46:07.401363    4650 logs.go:276] 1 containers: [cc6e3b87bfb2]
	I0805 16:46:07.401455    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:46:07.415695    4650 logs.go:276] 4 containers: [a99605df4ade 5f76653f2c45 c2937c496a6e f988cb366aa8]
	I0805 16:46:07.415766    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:46:07.427844    4650 logs.go:276] 1 containers: [639d9ba7ce0f]
	I0805 16:46:07.427900    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:46:07.438218    4650 logs.go:276] 1 containers: [0f914c4d93a3]
	I0805 16:46:07.438283    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:46:07.448338    4650 logs.go:276] 1 containers: [7aa2a6a609c4]
	I0805 16:46:07.448406    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:46:07.458048    4650 logs.go:276] 0 containers: []
	W0805 16:46:07.458060    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:46:07.458107    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:46:07.467949    4650 logs.go:276] 1 containers: [15de62697ec4]
	I0805 16:46:07.467965    4650 logs.go:123] Gathering logs for kube-scheduler [639d9ba7ce0f] ...
	I0805 16:46:07.467969    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639d9ba7ce0f"
	I0805 16:46:07.484560    4650 logs.go:123] Gathering logs for kube-proxy [0f914c4d93a3] ...
	I0805 16:46:07.484572    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f914c4d93a3"
	I0805 16:46:07.496597    4650 logs.go:123] Gathering logs for kube-apiserver [ac7e11b648c2] ...
	I0805 16:46:07.496609    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac7e11b648c2"
	I0805 16:46:07.512709    4650 logs.go:123] Gathering logs for etcd [cc6e3b87bfb2] ...
	I0805 16:46:07.512722    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6e3b87bfb2"
	I0805 16:46:07.527178    4650 logs.go:123] Gathering logs for coredns [c2937c496a6e] ...
	I0805 16:46:07.527191    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2937c496a6e"
	I0805 16:46:07.539466    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:46:07.539476    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:46:07.551923    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:46:07.551936    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:46:07.556256    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:46:07.556265    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:46:07.589088    4650 logs.go:123] Gathering logs for coredns [a99605df4ade] ...
	I0805 16:46:07.589102    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99605df4ade"
	I0805 16:46:07.600807    4650 logs.go:123] Gathering logs for coredns [5f76653f2c45] ...
	I0805 16:46:07.600820    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f76653f2c45"
	I0805 16:46:07.612087    4650 logs.go:123] Gathering logs for coredns [f988cb366aa8] ...
	I0805 16:46:07.612100    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f988cb366aa8"
	I0805 16:46:07.623559    4650 logs.go:123] Gathering logs for kube-controller-manager [7aa2a6a609c4] ...
	I0805 16:46:07.623570    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa2a6a609c4"
	I0805 16:46:07.645395    4650 logs.go:123] Gathering logs for storage-provisioner [15de62697ec4] ...
	I0805 16:46:07.645408    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15de62697ec4"
	I0805 16:46:07.656711    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:46:07.656722    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:46:07.681317    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:46:07.681328    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:46:10.217553    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:46:15.218077    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:46:15.218332    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:46:15.245082    4650 logs.go:276] 1 containers: [ac7e11b648c2]
	I0805 16:46:15.245200    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:46:15.260852    4650 logs.go:276] 1 containers: [cc6e3b87bfb2]
	I0805 16:46:15.260934    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:46:15.274614    4650 logs.go:276] 4 containers: [a99605df4ade 5f76653f2c45 c2937c496a6e f988cb366aa8]
	I0805 16:46:15.274689    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:46:15.285871    4650 logs.go:276] 1 containers: [639d9ba7ce0f]
	I0805 16:46:15.285943    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:46:15.296837    4650 logs.go:276] 1 containers: [0f914c4d93a3]
	I0805 16:46:15.296900    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:46:15.307468    4650 logs.go:276] 1 containers: [7aa2a6a609c4]
	I0805 16:46:15.307539    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:46:15.321729    4650 logs.go:276] 0 containers: []
	W0805 16:46:15.321742    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:46:15.321796    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:46:15.332396    4650 logs.go:276] 1 containers: [15de62697ec4]
	I0805 16:46:15.332413    4650 logs.go:123] Gathering logs for etcd [cc6e3b87bfb2] ...
	I0805 16:46:15.332420    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6e3b87bfb2"
	I0805 16:46:15.346409    4650 logs.go:123] Gathering logs for kube-controller-manager [7aa2a6a609c4] ...
	I0805 16:46:15.346421    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa2a6a609c4"
	I0805 16:46:15.364228    4650 logs.go:123] Gathering logs for storage-provisioner [15de62697ec4] ...
	I0805 16:46:15.364240    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15de62697ec4"
	I0805 16:46:15.375945    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:46:15.375959    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:46:15.400360    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:46:15.400366    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:46:15.411737    4650 logs.go:123] Gathering logs for coredns [c2937c496a6e] ...
	I0805 16:46:15.411750    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2937c496a6e"
	I0805 16:46:15.424091    4650 logs.go:123] Gathering logs for coredns [f988cb366aa8] ...
	I0805 16:46:15.424106    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f988cb366aa8"
	I0805 16:46:15.436235    4650 logs.go:123] Gathering logs for kube-scheduler [639d9ba7ce0f] ...
	I0805 16:46:15.436250    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639d9ba7ce0f"
	I0805 16:46:15.451610    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:46:15.451620    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:46:15.456027    4650 logs.go:123] Gathering logs for kube-apiserver [ac7e11b648c2] ...
	I0805 16:46:15.456034    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac7e11b648c2"
	I0805 16:46:15.470500    4650 logs.go:123] Gathering logs for coredns [a99605df4ade] ...
	I0805 16:46:15.470514    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99605df4ade"
	I0805 16:46:15.481794    4650 logs.go:123] Gathering logs for coredns [5f76653f2c45] ...
	I0805 16:46:15.481807    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f76653f2c45"
	I0805 16:46:15.492844    4650 logs.go:123] Gathering logs for kube-proxy [0f914c4d93a3] ...
	I0805 16:46:15.492857    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f914c4d93a3"
	I0805 16:46:15.504364    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:46:15.504376    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:46:15.538875    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:46:15.538886    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:46:18.076116    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:46:23.078852    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:46:23.078919    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:46:23.090505    4650 logs.go:276] 1 containers: [ac7e11b648c2]
	I0805 16:46:23.090560    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:46:23.102648    4650 logs.go:276] 1 containers: [cc6e3b87bfb2]
	I0805 16:46:23.102703    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:46:23.115079    4650 logs.go:276] 4 containers: [a99605df4ade 5f76653f2c45 c2937c496a6e f988cb366aa8]
	I0805 16:46:23.115139    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:46:23.126459    4650 logs.go:276] 1 containers: [639d9ba7ce0f]
	I0805 16:46:23.126518    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:46:23.138316    4650 logs.go:276] 1 containers: [0f914c4d93a3]
	I0805 16:46:23.138386    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:46:23.151032    4650 logs.go:276] 1 containers: [7aa2a6a609c4]
	I0805 16:46:23.151079    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:46:23.161573    4650 logs.go:276] 0 containers: []
	W0805 16:46:23.161581    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:46:23.161633    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:46:23.172612    4650 logs.go:276] 1 containers: [15de62697ec4]
	I0805 16:46:23.172628    4650 logs.go:123] Gathering logs for coredns [5f76653f2c45] ...
	I0805 16:46:23.172633    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f76653f2c45"
	I0805 16:46:23.190471    4650 logs.go:123] Gathering logs for kube-scheduler [639d9ba7ce0f] ...
	I0805 16:46:23.190483    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639d9ba7ce0f"
	I0805 16:46:23.206319    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:46:23.206327    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:46:23.241861    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:46:23.241882    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:46:23.280968    4650 logs.go:123] Gathering logs for coredns [a99605df4ade] ...
	I0805 16:46:23.280982    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99605df4ade"
	I0805 16:46:23.302257    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:46:23.302268    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:46:23.326763    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:46:23.326778    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:46:23.343338    4650 logs.go:123] Gathering logs for coredns [c2937c496a6e] ...
	I0805 16:46:23.343350    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2937c496a6e"
	I0805 16:46:23.362678    4650 logs.go:123] Gathering logs for storage-provisioner [15de62697ec4] ...
	I0805 16:46:23.362686    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15de62697ec4"
	I0805 16:46:23.374852    4650 logs.go:123] Gathering logs for etcd [cc6e3b87bfb2] ...
	I0805 16:46:23.374865    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6e3b87bfb2"
	I0805 16:46:23.391354    4650 logs.go:123] Gathering logs for coredns [f988cb366aa8] ...
	I0805 16:46:23.391367    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f988cb366aa8"
	I0805 16:46:23.404302    4650 logs.go:123] Gathering logs for kube-proxy [0f914c4d93a3] ...
	I0805 16:46:23.404310    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f914c4d93a3"
	I0805 16:46:23.416409    4650 logs.go:123] Gathering logs for kube-controller-manager [7aa2a6a609c4] ...
	I0805 16:46:23.416423    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa2a6a609c4"
	I0805 16:46:23.434718    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:46:23.434730    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:46:23.439768    4650 logs.go:123] Gathering logs for kube-apiserver [ac7e11b648c2] ...
	I0805 16:46:23.439780    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac7e11b648c2"
	I0805 16:46:25.957347    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:46:30.958538    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:46:30.958950    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:46:30.997571    4650 logs.go:276] 1 containers: [ac7e11b648c2]
	I0805 16:46:30.997690    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:46:31.016269    4650 logs.go:276] 1 containers: [cc6e3b87bfb2]
	I0805 16:46:31.016358    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:46:31.030471    4650 logs.go:276] 4 containers: [a99605df4ade 5f76653f2c45 c2937c496a6e f988cb366aa8]
	I0805 16:46:31.030562    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:46:31.042496    4650 logs.go:276] 1 containers: [639d9ba7ce0f]
	I0805 16:46:31.042566    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:46:31.052951    4650 logs.go:276] 1 containers: [0f914c4d93a3]
	I0805 16:46:31.053018    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:46:31.063981    4650 logs.go:276] 1 containers: [7aa2a6a609c4]
	I0805 16:46:31.064048    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:46:31.074601    4650 logs.go:276] 0 containers: []
	W0805 16:46:31.074611    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:46:31.074666    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:46:31.085885    4650 logs.go:276] 1 containers: [15de62697ec4]
	I0805 16:46:31.085903    4650 logs.go:123] Gathering logs for kube-proxy [0f914c4d93a3] ...
	I0805 16:46:31.085908    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f914c4d93a3"
	I0805 16:46:31.097532    4650 logs.go:123] Gathering logs for kube-controller-manager [7aa2a6a609c4] ...
	I0805 16:46:31.097543    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa2a6a609c4"
	I0805 16:46:31.115689    4650 logs.go:123] Gathering logs for storage-provisioner [15de62697ec4] ...
	I0805 16:46:31.115698    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15de62697ec4"
	I0805 16:46:31.128094    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:46:31.128108    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:46:31.140555    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:46:31.140568    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:46:31.172690    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:46:31.172696    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:46:31.207542    4650 logs.go:123] Gathering logs for coredns [c2937c496a6e] ...
	I0805 16:46:31.207556    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2937c496a6e"
	I0805 16:46:31.223742    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:46:31.223753    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:46:31.227973    4650 logs.go:123] Gathering logs for kube-apiserver [ac7e11b648c2] ...
	I0805 16:46:31.227979    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac7e11b648c2"
	I0805 16:46:31.246495    4650 logs.go:123] Gathering logs for etcd [cc6e3b87bfb2] ...
	I0805 16:46:31.246509    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6e3b87bfb2"
	I0805 16:46:31.261255    4650 logs.go:123] Gathering logs for coredns [f988cb366aa8] ...
	I0805 16:46:31.261265    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f988cb366aa8"
	I0805 16:46:31.274507    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:46:31.274518    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:46:31.299241    4650 logs.go:123] Gathering logs for coredns [a99605df4ade] ...
	I0805 16:46:31.299252    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99605df4ade"
	I0805 16:46:31.311170    4650 logs.go:123] Gathering logs for coredns [5f76653f2c45] ...
	I0805 16:46:31.311182    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f76653f2c45"
	I0805 16:46:31.326977    4650 logs.go:123] Gathering logs for kube-scheduler [639d9ba7ce0f] ...
	I0805 16:46:31.326990    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639d9ba7ce0f"
	I0805 16:46:33.846054    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:46:38.848722    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:46:38.849168    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 16:46:38.888655    4650 logs.go:276] 1 containers: [ac7e11b648c2]
	I0805 16:46:38.888783    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 16:46:38.909922    4650 logs.go:276] 1 containers: [cc6e3b87bfb2]
	I0805 16:46:38.910028    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 16:46:38.925947    4650 logs.go:276] 4 containers: [a99605df4ade 5f76653f2c45 c2937c496a6e f988cb366aa8]
	I0805 16:46:38.926018    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 16:46:38.941508    4650 logs.go:276] 1 containers: [639d9ba7ce0f]
	I0805 16:46:38.941582    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 16:46:38.953259    4650 logs.go:276] 1 containers: [0f914c4d93a3]
	I0805 16:46:38.953328    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 16:46:38.964527    4650 logs.go:276] 1 containers: [7aa2a6a609c4]
	I0805 16:46:38.964593    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 16:46:38.974512    4650 logs.go:276] 0 containers: []
	W0805 16:46:38.974525    4650 logs.go:278] No container was found matching "kindnet"
	I0805 16:46:38.974580    4650 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 16:46:38.985337    4650 logs.go:276] 1 containers: [15de62697ec4]
	I0805 16:46:38.985354    4650 logs.go:123] Gathering logs for kubelet ...
	I0805 16:46:38.985359    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 16:46:39.018199    4650 logs.go:123] Gathering logs for dmesg ...
	I0805 16:46:39.018207    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 16:46:39.022228    4650 logs.go:123] Gathering logs for kube-apiserver [ac7e11b648c2] ...
	I0805 16:46:39.022235    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac7e11b648c2"
	I0805 16:46:39.036945    4650 logs.go:123] Gathering logs for coredns [a99605df4ade] ...
	I0805 16:46:39.036959    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99605df4ade"
	I0805 16:46:39.048585    4650 logs.go:123] Gathering logs for kube-controller-manager [7aa2a6a609c4] ...
	I0805 16:46:39.048594    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa2a6a609c4"
	I0805 16:46:39.070436    4650 logs.go:123] Gathering logs for etcd [cc6e3b87bfb2] ...
	I0805 16:46:39.070447    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6e3b87bfb2"
	I0805 16:46:39.084519    4650 logs.go:123] Gathering logs for coredns [5f76653f2c45] ...
	I0805 16:46:39.084531    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f76653f2c45"
	I0805 16:46:39.096235    4650 logs.go:123] Gathering logs for coredns [c2937c496a6e] ...
	I0805 16:46:39.096248    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2937c496a6e"
	I0805 16:46:39.107868    4650 logs.go:123] Gathering logs for coredns [f988cb366aa8] ...
	I0805 16:46:39.107877    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f988cb366aa8"
	I0805 16:46:39.119958    4650 logs.go:123] Gathering logs for storage-provisioner [15de62697ec4] ...
	I0805 16:46:39.119969    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15de62697ec4"
	I0805 16:46:39.131389    4650 logs.go:123] Gathering logs for Docker ...
	I0805 16:46:39.131404    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 16:46:39.155584    4650 logs.go:123] Gathering logs for describe nodes ...
	I0805 16:46:39.155591    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 16:46:39.190032    4650 logs.go:123] Gathering logs for kube-scheduler [639d9ba7ce0f] ...
	I0805 16:46:39.190044    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639d9ba7ce0f"
	I0805 16:46:39.204998    4650 logs.go:123] Gathering logs for kube-proxy [0f914c4d93a3] ...
	I0805 16:46:39.205012    4650 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f914c4d93a3"
	I0805 16:46:39.216802    4650 logs.go:123] Gathering logs for container status ...
	I0805 16:46:39.216814    4650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 16:46:41.730583    4650 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 16:46:46.731243    4650 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 16:46:46.734733    4650 out.go:177] 
	W0805 16:46:46.738524    4650 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0805 16:46:46.738531    4650 out.go:239] * 
	W0805 16:46:46.738998    4650 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:46:46.754547    4650 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-596000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (577.68s)
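
The failure above is minikube's API-server wait loop timing out: the run repeatedly probes https://10.0.2.15:8443/healthz, re-gathers component logs between attempts, never sees a healthy response, and gives up after the 6m0s node wait. A minimal manual check of the same endpoint, assuming shell access inside the guest VM (the IP and port come from the log; the curl flags are illustrative, not part of the test):

	# Probe the same health endpoint minikube polls above.
	# -k skips TLS verification; --max-time bounds the wait like the client timeout in the log.
	curl -k --max-time 5 https://10.0.2.15:8443/healthz
	# A healthy kube-apiserver answers "ok"; in this run the request never completes.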

                                                
                                    
TestPause/serial/Start (9.87s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-620000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-620000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.825096292s)

                                                
                                                
-- stdout --
	* [pause-620000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-620000" primary control-plane node in "pause-620000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-620000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-620000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-620000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-620000 -n pause-620000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-620000 -n pause-620000: exit status 7 (46.696916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-620000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.87s)
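
Every qemu2 start in the remaining groups fails the same way: the driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so VM creation is retried once and then aborted with GUEST_PROVISION. A quick host-side sanity check, as a sketch using the paths from the log (how the daemon is launched depends on this host's socket_vmnet install):

	# Is the socket present, and is a socket_vmnet daemon actually serving it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If nothing is listening, restarting the daemon (e.g. via its launchd or
	# Homebrew service) should clear the "Connection refused" errors before re-running.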

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (10.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-229000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-229000 --driver=qemu2 : exit status 80 (10.022496917s)

                                                
                                                
-- stdout --
	* [NoKubernetes-229000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-229000" primary control-plane node in "NoKubernetes-229000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-229000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-229000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-229000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-229000 -n NoKubernetes-229000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-229000 -n NoKubernetes-229000: exit status 7 (64.775958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-229000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-229000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-229000 --no-kubernetes --driver=qemu2 : exit status 80 (5.257351875s)

                                                
                                                
-- stdout --
	* [NoKubernetes-229000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-229000
	* Restarting existing qemu2 VM for "NoKubernetes-229000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-229000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-229000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-229000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-229000 -n NoKubernetes-229000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-229000 -n NoKubernetes-229000: exit status 7 (56.189542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-229000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.31s)

                                                
                                    
TestNoKubernetes/serial/Start (5.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-229000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-229000 --no-kubernetes --driver=qemu2 : exit status 80 (5.243641417s)

                                                
                                                
-- stdout --
	* [NoKubernetes-229000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-229000
	* Restarting existing qemu2 VM for "NoKubernetes-229000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-229000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-229000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-229000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-229000 -n NoKubernetes-229000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-229000 -n NoKubernetes-229000: exit status 7 (58.338792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-229000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-229000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-229000 --driver=qemu2 : exit status 80 (5.272925209s)

                                                
                                                
-- stdout --
	* [NoKubernetes-229000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-229000
	* Restarting existing qemu2 VM for "NoKubernetes-229000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-229000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-229000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-229000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-229000 -n NoKubernetes-229000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-229000 -n NoKubernetes-229000: exit status 7 (47.908625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-229000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.32s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-364000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-364000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.805933917s)

                                                
                                                
-- stdout --
	* [auto-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-364000" primary control-plane node in "auto-364000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-364000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 16:44:59.903590    5045 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:44:59.903725    5045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:44:59.903728    5045 out.go:304] Setting ErrFile to fd 2...
	I0805 16:44:59.903731    5045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:44:59.903872    5045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:44:59.904957    5045 out.go:298] Setting JSON to false
	I0805 16:44:59.921004    5045 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4470,"bootTime":1722897029,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:44:59.921075    5045 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:44:59.926237    5045 out.go:177] * [auto-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:44:59.933253    5045 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:44:59.933305    5045 notify.go:220] Checking for updates...
	I0805 16:44:59.940274    5045 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:44:59.943202    5045 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:44:59.946213    5045 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:44:59.949282    5045 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:44:59.952166    5045 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:44:59.955521    5045 config.go:182] Loaded profile config "multinode-860000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:44:59.955593    5045 config.go:182] Loaded profile config "stopped-upgrade-596000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 16:44:59.955647    5045 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:44:59.959154    5045 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 16:44:59.966168    5045 start.go:297] selected driver: qemu2
	I0805 16:44:59.966174    5045 start.go:901] validating driver "qemu2" against <nil>
	I0805 16:44:59.966184    5045 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:44:59.968391    5045 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:44:59.971209    5045 out.go:177] * Automatically selected the socket_vmnet network
	I0805 16:44:59.974301    5045 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:44:59.974327    5045 cni.go:84] Creating CNI manager for ""
	I0805 16:44:59.974335    5045 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:44:59.974340    5045 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 16:44:59.974380    5045 start.go:340] cluster config:
	{Name:auto-364000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:44:59.977795    5045 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:44:59.985256    5045 out.go:177] * Starting "auto-364000" primary control-plane node in "auto-364000" cluster
	I0805 16:44:59.988185    5045 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:44:59.988206    5045 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 16:44:59.988214    5045 cache.go:56] Caching tarball of preloaded images
	I0805 16:44:59.988270    5045 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:44:59.988275    5045 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:44:59.988322    5045 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/auto-364000/config.json ...
	I0805 16:44:59.988332    5045 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/auto-364000/config.json: {Name:mk6e33c987772e57764a29ac2a61e98660d30540 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:44:59.988620    5045 start.go:360] acquireMachinesLock for auto-364000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:44:59.988648    5045 start.go:364] duration metric: took 23.458µs to acquireMachinesLock for "auto-364000"
	I0805 16:44:59.988660    5045 start.go:93] Provisioning new machine with config: &{Name:auto-364000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:44:59.988689    5045 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:44:59.992286    5045 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 16:45:00.007629    5045 start.go:159] libmachine.API.Create for "auto-364000" (driver="qemu2")
	I0805 16:45:00.007657    5045 client.go:168] LocalClient.Create starting
	I0805 16:45:00.007731    5045 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:45:00.007765    5045 main.go:141] libmachine: Decoding PEM data...
	I0805 16:45:00.007775    5045 main.go:141] libmachine: Parsing certificate...
	I0805 16:45:00.007809    5045 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:45:00.007835    5045 main.go:141] libmachine: Decoding PEM data...
	I0805 16:45:00.007846    5045 main.go:141] libmachine: Parsing certificate...
	I0805 16:45:00.008190    5045 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:45:00.160586    5045 main.go:141] libmachine: Creating SSH key...
	I0805 16:45:00.213862    5045 main.go:141] libmachine: Creating Disk image...
	I0805 16:45:00.213867    5045 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:45:00.214053    5045 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/auto-364000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/auto-364000/disk.qcow2
	I0805 16:45:00.223281    5045 main.go:141] libmachine: STDOUT: 
	I0805 16:45:00.223311    5045 main.go:141] libmachine: STDERR: 
	I0805 16:45:00.223358    5045 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/auto-364000/disk.qcow2 +20000M
	I0805 16:45:00.231472    5045 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:45:00.231487    5045 main.go:141] libmachine: STDERR: 
	I0805 16:45:00.231498    5045 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/auto-364000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/auto-364000/disk.qcow2
	I0805 16:45:00.231504    5045 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:45:00.231518    5045 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:45:00.231543    5045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/auto-364000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/auto-364000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/auto-364000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:b3:09:34:d9:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/auto-364000/disk.qcow2
	I0805 16:45:00.233208    5045 main.go:141] libmachine: STDOUT: 
	I0805 16:45:00.233225    5045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:45:00.233245    5045 client.go:171] duration metric: took 225.587542ms to LocalClient.Create
	I0805 16:45:02.235496    5045 start.go:128] duration metric: took 2.246824958s to createHost
	I0805 16:45:02.235576    5045 start.go:83] releasing machines lock for "auto-364000", held for 2.246963084s
	W0805 16:45:02.235636    5045 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:45:02.247715    5045 out.go:177] * Deleting "auto-364000" in qemu2 ...
	W0805 16:45:02.274777    5045 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:45:02.274814    5045 start.go:729] Will try again in 5 seconds ...
	I0805 16:45:07.276910    5045 start.go:360] acquireMachinesLock for auto-364000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:45:07.277313    5045 start.go:364] duration metric: took 316.417µs to acquireMachinesLock for "auto-364000"
	I0805 16:45:07.277369    5045 start.go:93] Provisioning new machine with config: &{Name:auto-364000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:45:07.277632    5045 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:45:07.287024    5045 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 16:45:07.332124    5045 start.go:159] libmachine.API.Create for "auto-364000" (driver="qemu2")
	I0805 16:45:07.332180    5045 client.go:168] LocalClient.Create starting
	I0805 16:45:07.332300    5045 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:45:07.332368    5045 main.go:141] libmachine: Decoding PEM data...
	I0805 16:45:07.332384    5045 main.go:141] libmachine: Parsing certificate...
	I0805 16:45:07.332464    5045 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:45:07.332519    5045 main.go:141] libmachine: Decoding PEM data...
	I0805 16:45:07.332529    5045 main.go:141] libmachine: Parsing certificate...
	I0805 16:45:07.333220    5045 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:45:07.494718    5045 main.go:141] libmachine: Creating SSH key...
	I0805 16:45:07.613072    5045 main.go:141] libmachine: Creating Disk image...
	I0805 16:45:07.613079    5045 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:45:07.613298    5045 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/auto-364000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/auto-364000/disk.qcow2
	I0805 16:45:07.623353    5045 main.go:141] libmachine: STDOUT: 
	I0805 16:45:07.623379    5045 main.go:141] libmachine: STDERR: 
	I0805 16:45:07.623431    5045 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/auto-364000/disk.qcow2 +20000M
	I0805 16:45:07.632088    5045 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:45:07.632105    5045 main.go:141] libmachine: STDERR: 
	I0805 16:45:07.632120    5045 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/auto-364000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/auto-364000/disk.qcow2
	I0805 16:45:07.632125    5045 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:45:07.632135    5045 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:45:07.632171    5045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/auto-364000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/auto-364000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/auto-364000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:6e:da:a4:f3:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/auto-364000/disk.qcow2
	I0805 16:45:07.634038    5045 main.go:141] libmachine: STDOUT: 
	I0805 16:45:07.634050    5045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:45:07.634064    5045 client.go:171] duration metric: took 301.884291ms to LocalClient.Create
	I0805 16:45:09.636155    5045 start.go:128] duration metric: took 2.3585525s to createHost
	I0805 16:45:09.636217    5045 start.go:83] releasing machines lock for "auto-364000", held for 2.358933458s
	W0805 16:45:09.636428    5045 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-364000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-364000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:45:09.650861    5045 out.go:177] 
	W0805 16:45:09.653995    5045 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:45:09.654053    5045 out.go:239] * 
	* 
	W0805 16:45:09.655646    5045 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:45:09.667895    5045 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.81s)
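
Every start in this group fails at the same step: minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), and libmachine aborts before any VM exists, so each run exits with status 80. A quick triage on the CI host might look like the sketch below; the manual daemon invocation and the gateway address follow socket_vmnet's documented defaults and are assumptions, not values recorded in this log.

	# Does the daemon's unix socket exist?
	ls -l /var/run/socket_vmnet
	# Repeat the handshake minikube performs; with the daemon down this
	# reproduces the exact "Connection refused" error above.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# Start the daemon by hand (vmnet requires root); the gateway address is
	# socket_vmnet's documented default, not a value taken from this log.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet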

TestNetworkPlugins/group/kindnet/Start (9.99s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-364000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-364000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.985212542s)

-- stdout --
	* [kindnet-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-364000" primary control-plane node in "kindnet-364000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-364000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:45:11.814659    5159 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:45:11.814787    5159 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:45:11.814791    5159 out.go:304] Setting ErrFile to fd 2...
	I0805 16:45:11.814793    5159 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:45:11.814915    5159 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:45:11.815977    5159 out.go:298] Setting JSON to false
	I0805 16:45:11.832350    5159 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4482,"bootTime":1722897029,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:45:11.832419    5159 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:45:11.836251    5159 out.go:177] * [kindnet-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:45:11.843562    5159 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:45:11.843632    5159 notify.go:220] Checking for updates...
	I0805 16:45:11.849457    5159 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:45:11.852529    5159 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:45:11.854070    5159 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:45:11.857474    5159 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:45:11.864475    5159 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:45:11.866332    5159 config.go:182] Loaded profile config "multinode-860000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:45:11.866395    5159 config.go:182] Loaded profile config "stopped-upgrade-596000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 16:45:11.866439    5159 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:45:11.870536    5159 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 16:45:11.877346    5159 start.go:297] selected driver: qemu2
	I0805 16:45:11.877352    5159 start.go:901] validating driver "qemu2" against <nil>
	I0805 16:45:11.877360    5159 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:45:11.879531    5159 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:45:11.882528    5159 out.go:177] * Automatically selected the socket_vmnet network
	I0805 16:45:11.885588    5159 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:45:11.885639    5159 cni.go:84] Creating CNI manager for "kindnet"
	I0805 16:45:11.885644    5159 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 16:45:11.885678    5159 start.go:340] cluster config:
	{Name:kindnet-364000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:45:11.889275    5159 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:45:11.896548    5159 out.go:177] * Starting "kindnet-364000" primary control-plane node in "kindnet-364000" cluster
	I0805 16:45:11.900478    5159 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:45:11.900495    5159 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 16:45:11.900507    5159 cache.go:56] Caching tarball of preloaded images
	I0805 16:45:11.900600    5159 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:45:11.900605    5159 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:45:11.900664    5159 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/kindnet-364000/config.json ...
	I0805 16:45:11.900675    5159 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/kindnet-364000/config.json: {Name:mke26ac5dbeb21b6be36470fdd8f56cda502e568 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:45:11.901043    5159 start.go:360] acquireMachinesLock for kindnet-364000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:45:11.901076    5159 start.go:364] duration metric: took 27.541µs to acquireMachinesLock for "kindnet-364000"
	I0805 16:45:11.901086    5159 start.go:93] Provisioning new machine with config: &{Name:kindnet-364000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:45:11.901116    5159 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:45:11.909546    5159 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 16:45:11.926572    5159 start.go:159] libmachine.API.Create for "kindnet-364000" (driver="qemu2")
	I0805 16:45:11.926599    5159 client.go:168] LocalClient.Create starting
	I0805 16:45:11.926659    5159 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:45:11.926691    5159 main.go:141] libmachine: Decoding PEM data...
	I0805 16:45:11.926703    5159 main.go:141] libmachine: Parsing certificate...
	I0805 16:45:11.926742    5159 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:45:11.926764    5159 main.go:141] libmachine: Decoding PEM data...
	I0805 16:45:11.926771    5159 main.go:141] libmachine: Parsing certificate...
	I0805 16:45:11.927279    5159 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:45:12.117937    5159 main.go:141] libmachine: Creating SSH key...
	I0805 16:45:12.187993    5159 main.go:141] libmachine: Creating Disk image...
	I0805 16:45:12.188000    5159 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:45:12.188193    5159 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kindnet-364000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kindnet-364000/disk.qcow2
	I0805 16:45:12.197216    5159 main.go:141] libmachine: STDOUT: 
	I0805 16:45:12.197233    5159 main.go:141] libmachine: STDERR: 
	I0805 16:45:12.197268    5159 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kindnet-364000/disk.qcow2 +20000M
	I0805 16:45:12.205104    5159 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:45:12.205117    5159 main.go:141] libmachine: STDERR: 
	I0805 16:45:12.205134    5159 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kindnet-364000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kindnet-364000/disk.qcow2
	I0805 16:45:12.205140    5159 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:45:12.205153    5159 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:45:12.205187    5159 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kindnet-364000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kindnet-364000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kindnet-364000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:ab:9e:8f:b9:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kindnet-364000/disk.qcow2
	I0805 16:45:12.206942    5159 main.go:141] libmachine: STDOUT: 
	I0805 16:45:12.206959    5159 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:45:12.206978    5159 client.go:171] duration metric: took 280.380667ms to LocalClient.Create
	I0805 16:45:14.209198    5159 start.go:128] duration metric: took 2.308096334s to createHost
	I0805 16:45:14.209298    5159 start.go:83] releasing machines lock for "kindnet-364000", held for 2.308256708s
	W0805 16:45:14.209356    5159 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:45:14.221682    5159 out.go:177] * Deleting "kindnet-364000" in qemu2 ...
	W0805 16:45:14.250645    5159 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:45:14.250682    5159 start.go:729] Will try again in 5 seconds ...
	I0805 16:45:19.252851    5159 start.go:360] acquireMachinesLock for kindnet-364000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:45:19.253419    5159 start.go:364] duration metric: took 459.75µs to acquireMachinesLock for "kindnet-364000"
	I0805 16:45:19.253554    5159 start.go:93] Provisioning new machine with config: &{Name:kindnet-364000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:45:19.253814    5159 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:45:19.259438    5159 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 16:45:19.303759    5159 start.go:159] libmachine.API.Create for "kindnet-364000" (driver="qemu2")
	I0805 16:45:19.303812    5159 client.go:168] LocalClient.Create starting
	I0805 16:45:19.303952    5159 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:45:19.304046    5159 main.go:141] libmachine: Decoding PEM data...
	I0805 16:45:19.304069    5159 main.go:141] libmachine: Parsing certificate...
	I0805 16:45:19.304143    5159 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:45:19.304190    5159 main.go:141] libmachine: Decoding PEM data...
	I0805 16:45:19.304211    5159 main.go:141] libmachine: Parsing certificate...
	I0805 16:45:19.304821    5159 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:45:19.466736    5159 main.go:141] libmachine: Creating SSH key...
	I0805 16:45:19.709191    5159 main.go:141] libmachine: Creating Disk image...
	I0805 16:45:19.709203    5159 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:45:19.709427    5159 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kindnet-364000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kindnet-364000/disk.qcow2
	I0805 16:45:19.719385    5159 main.go:141] libmachine: STDOUT: 
	I0805 16:45:19.719405    5159 main.go:141] libmachine: STDERR: 
	I0805 16:45:19.719469    5159 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kindnet-364000/disk.qcow2 +20000M
	I0805 16:45:19.728016    5159 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:45:19.728031    5159 main.go:141] libmachine: STDERR: 
	I0805 16:45:19.728051    5159 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kindnet-364000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kindnet-364000/disk.qcow2
	I0805 16:45:19.728055    5159 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:45:19.728065    5159 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:45:19.728091    5159 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kindnet-364000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kindnet-364000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kindnet-364000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:4a:bd:87:82:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kindnet-364000/disk.qcow2
	I0805 16:45:19.729829    5159 main.go:141] libmachine: STDOUT: 
	I0805 16:45:19.729848    5159 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:45:19.729864    5159 client.go:171] duration metric: took 426.052875ms to LocalClient.Create
	I0805 16:45:21.732021    5159 start.go:128] duration metric: took 2.47821625s to createHost
	I0805 16:45:21.732114    5159 start.go:83] releasing machines lock for "kindnet-364000", held for 2.478721s
	W0805 16:45:21.732430    5159 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-364000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-364000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:45:21.741921    5159 out.go:177] 
	W0805 16:45:21.748076    5159 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:45:21.748123    5159 out.go:239] * 
	* 
	W0805 16:45:21.750221    5159 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:45:21.758942    5159 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.99s)
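
Note that the qemu-system-aarch64 processes logged above are never actually exec'd: socket_vmnet_client connects to the daemon socket first and only then execs QEMU, handing it the connected socket as file descriptor 3 (which is what "-netdev socket,id=net0,fd=3" refers to). Because the connect fails, minikube deletes the half-created profile, retries once after 5 seconds, and gives up, which accounts for the roughly 10-second duration of every test in this group. To isolate the problem to socket_vmnet rather than QEMU or hvf, the same guest could be booted with user-mode networking instead; this is a manual triage sketch, not a command the suite runs, and the firmware/ISO paths are copied from the log above.

	# Same machine type and accelerator as the logged command, but with QEMU's
	# built-in user-mode networking so no socket_vmnet daemon is involved.
	qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 3072 -smp 2 \
	  -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	  -display none -boot d \
	  -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso \
	  -device virtio-net-pci,netdev=net0 -netdev user,id=net0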

TestNetworkPlugins/group/calico/Start (9.94s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-364000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-364000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.940135917s)

-- stdout --
	* [calico-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-364000" primary control-plane node in "calico-364000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-364000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:45:23.984309    5284 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:45:23.984457    5284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:45:23.984460    5284 out.go:304] Setting ErrFile to fd 2...
	I0805 16:45:23.984463    5284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:45:23.984595    5284 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:45:23.985714    5284 out.go:298] Setting JSON to false
	I0805 16:45:24.001912    5284 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4494,"bootTime":1722897029,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:45:24.001982    5284 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:45:24.007708    5284 out.go:177] * [calico-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:45:24.014370    5284 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:45:24.014408    5284 notify.go:220] Checking for updates...
	I0805 16:45:24.021497    5284 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:45:24.023007    5284 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:45:24.026476    5284 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:45:24.029515    5284 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:45:24.032497    5284 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:45:24.035787    5284 config.go:182] Loaded profile config "multinode-860000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:45:24.035854    5284 config.go:182] Loaded profile config "stopped-upgrade-596000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 16:45:24.035906    5284 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:45:24.040530    5284 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 16:45:24.047468    5284 start.go:297] selected driver: qemu2
	I0805 16:45:24.047474    5284 start.go:901] validating driver "qemu2" against <nil>
	I0805 16:45:24.047481    5284 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:45:24.049831    5284 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:45:24.053505    5284 out.go:177] * Automatically selected the socket_vmnet network
	I0805 16:45:24.056560    5284 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:45:24.056580    5284 cni.go:84] Creating CNI manager for "calico"
	I0805 16:45:24.056583    5284 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0805 16:45:24.056613    5284 start.go:340] cluster config:
	{Name:calico-364000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:45:24.060394    5284 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:45:24.068490    5284 out.go:177] * Starting "calico-364000" primary control-plane node in "calico-364000" cluster
	I0805 16:45:24.071366    5284 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:45:24.071381    5284 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 16:45:24.071389    5284 cache.go:56] Caching tarball of preloaded images
	I0805 16:45:24.071447    5284 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:45:24.071452    5284 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:45:24.071505    5284 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/calico-364000/config.json ...
	I0805 16:45:24.071516    5284 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/calico-364000/config.json: {Name:mkff759df23f57686b5885ae5c31408a12fd34a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:45:24.071830    5284 start.go:360] acquireMachinesLock for calico-364000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:45:24.071864    5284 start.go:364] duration metric: took 26.625µs to acquireMachinesLock for "calico-364000"
	I0805 16:45:24.071873    5284 start.go:93] Provisioning new machine with config: &{Name:calico-364000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:45:24.071915    5284 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:45:24.080362    5284 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 16:45:24.095534    5284 start.go:159] libmachine.API.Create for "calico-364000" (driver="qemu2")
	I0805 16:45:24.095559    5284 client.go:168] LocalClient.Create starting
	I0805 16:45:24.095627    5284 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:45:24.095660    5284 main.go:141] libmachine: Decoding PEM data...
	I0805 16:45:24.095668    5284 main.go:141] libmachine: Parsing certificate...
	I0805 16:45:24.095705    5284 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:45:24.095732    5284 main.go:141] libmachine: Decoding PEM data...
	I0805 16:45:24.095740    5284 main.go:141] libmachine: Parsing certificate...
	I0805 16:45:24.096233    5284 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:45:24.250463    5284 main.go:141] libmachine: Creating SSH key...
	I0805 16:45:24.469495    5284 main.go:141] libmachine: Creating Disk image...
	I0805 16:45:24.469504    5284 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:45:24.469734    5284 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/calico-364000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/calico-364000/disk.qcow2
	I0805 16:45:24.479634    5284 main.go:141] libmachine: STDOUT: 
	I0805 16:45:24.479661    5284 main.go:141] libmachine: STDERR: 
	I0805 16:45:24.479719    5284 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/calico-364000/disk.qcow2 +20000M
	I0805 16:45:24.487966    5284 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:45:24.487980    5284 main.go:141] libmachine: STDERR: 
	I0805 16:45:24.488001    5284 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/calico-364000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/calico-364000/disk.qcow2
	I0805 16:45:24.488006    5284 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:45:24.488021    5284 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:45:24.488053    5284 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/calico-364000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/calico-364000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/calico-364000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:a3:32:a4:16:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/calico-364000/disk.qcow2
	I0805 16:45:24.489754    5284 main.go:141] libmachine: STDOUT: 
	I0805 16:45:24.489771    5284 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:45:24.489791    5284 client.go:171] duration metric: took 394.23325ms to LocalClient.Create
	I0805 16:45:26.492073    5284 start.go:128] duration metric: took 2.420153875s to createHost
	I0805 16:45:26.492170    5284 start.go:83] releasing machines lock for "calico-364000", held for 2.420345334s
	W0805 16:45:26.492217    5284 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:45:26.503513    5284 out.go:177] * Deleting "calico-364000" in qemu2 ...
	W0805 16:45:26.531260    5284 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:45:26.531283    5284 start.go:729] Will try again in 5 seconds ...
	I0805 16:45:31.533340    5284 start.go:360] acquireMachinesLock for calico-364000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:45:31.533560    5284 start.go:364] duration metric: took 169.625µs to acquireMachinesLock for "calico-364000"
	I0805 16:45:31.533587    5284 start.go:93] Provisioning new machine with config: &{Name:calico-364000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:45:31.533738    5284 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:45:31.546088    5284 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 16:45:31.576097    5284 start.go:159] libmachine.API.Create for "calico-364000" (driver="qemu2")
	I0805 16:45:31.576143    5284 client.go:168] LocalClient.Create starting
	I0805 16:45:31.576267    5284 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:45:31.576317    5284 main.go:141] libmachine: Decoding PEM data...
	I0805 16:45:31.576330    5284 main.go:141] libmachine: Parsing certificate...
	I0805 16:45:31.576385    5284 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:45:31.576420    5284 main.go:141] libmachine: Decoding PEM data...
	I0805 16:45:31.576428    5284 main.go:141] libmachine: Parsing certificate...
	I0805 16:45:31.576892    5284 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:45:31.735951    5284 main.go:141] libmachine: Creating SSH key...
	I0805 16:45:31.837746    5284 main.go:141] libmachine: Creating Disk image...
	I0805 16:45:31.837760    5284 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:45:31.837959    5284 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/calico-364000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/calico-364000/disk.qcow2
	I0805 16:45:31.847830    5284 main.go:141] libmachine: STDOUT: 
	I0805 16:45:31.847846    5284 main.go:141] libmachine: STDERR: 
	I0805 16:45:31.847898    5284 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/calico-364000/disk.qcow2 +20000M
	I0805 16:45:31.855821    5284 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:45:31.855833    5284 main.go:141] libmachine: STDERR: 
	I0805 16:45:31.855848    5284 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/calico-364000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/calico-364000/disk.qcow2
	I0805 16:45:31.855852    5284 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:45:31.855865    5284 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:45:31.855907    5284 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/calico-364000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/calico-364000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/calico-364000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:aa:a8:55:d0:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/calico-364000/disk.qcow2
	I0805 16:45:31.857601    5284 main.go:141] libmachine: STDOUT: 
	I0805 16:45:31.857613    5284 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:45:31.857626    5284 client.go:171] duration metric: took 281.483792ms to LocalClient.Create
	I0805 16:45:33.859520    5284 start.go:128] duration metric: took 2.325812166s to createHost
	I0805 16:45:33.859551    5284 start.go:83] releasing machines lock for "calico-364000", held for 2.326027125s
	W0805 16:45:33.859756    5284 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-364000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-364000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:45:33.871007    5284 out.go:177] 
	W0805 16:45:33.876123    5284 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:45:33.876135    5284 out.go:239] * 
	* 
	W0805 16:45:33.877296    5284 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:45:33.888059    5284 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.94s)
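
Every start in this group fails at the same point: the socket_vmnet daemon behind /var/run/socket_vmnet refuses the connection, so /opt/socket_vmnet/bin/socket_vmnet_client never obtains a network fd to hand to qemu-system-aarch64, and host creation aborts before the VM boots. A minimal Go probe (illustrative only, not part of the test suite) that reproduces the "Connection refused" seen above:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Dial the same unix socket that socket_vmnet_client opens; a
		// "connection refused" here reproduces the failure logged above
		// and usually means the socket_vmnet daemon is not running.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, restarting the daemon on the build agent (for a Homebrew install, typically sudo brew services start socket_vmnet) should clear this whole group of failures.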

TestNetworkPlugins/group/custom-flannel/Start (9.73s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-364000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-364000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.732955583s)

-- stdout --
	* [custom-flannel-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-364000" primary control-plane node in "custom-flannel-364000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-364000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:45:36.312048    5409 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:45:36.312188    5409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:45:36.312192    5409 out.go:304] Setting ErrFile to fd 2...
	I0805 16:45:36.312198    5409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:45:36.312328    5409 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:45:36.313437    5409 out.go:298] Setting JSON to false
	I0805 16:45:36.329703    5409 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4507,"bootTime":1722897029,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:45:36.329776    5409 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:45:36.336017    5409 out.go:177] * [custom-flannel-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:45:36.343024    5409 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:45:36.343167    5409 notify.go:220] Checking for updates...
	I0805 16:45:36.350004    5409 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:45:36.353058    5409 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:45:36.356001    5409 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:45:36.358997    5409 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:45:36.361983    5409 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:45:36.365241    5409 config.go:182] Loaded profile config "multinode-860000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:45:36.365303    5409 config.go:182] Loaded profile config "stopped-upgrade-596000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 16:45:36.365352    5409 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:45:36.368922    5409 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 16:45:36.374914    5409 start.go:297] selected driver: qemu2
	I0805 16:45:36.374921    5409 start.go:901] validating driver "qemu2" against <nil>
	I0805 16:45:36.374928    5409 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:45:36.377282    5409 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:45:36.379921    5409 out.go:177] * Automatically selected the socket_vmnet network
	I0805 16:45:36.383028    5409 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:45:36.383065    5409 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0805 16:45:36.383074    5409 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0805 16:45:36.383105    5409 start.go:340] cluster config:
	{Name:custom-flannel-364000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:45:36.386963    5409 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:45:36.395039    5409 out.go:177] * Starting "custom-flannel-364000" primary control-plane node in "custom-flannel-364000" cluster
	I0805 16:45:36.399015    5409 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:45:36.399034    5409 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 16:45:36.399044    5409 cache.go:56] Caching tarball of preloaded images
	I0805 16:45:36.399102    5409 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:45:36.399107    5409 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:45:36.399179    5409 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/custom-flannel-364000/config.json ...
	I0805 16:45:36.399189    5409 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/custom-flannel-364000/config.json: {Name:mka9a2ad5d1a77ede95d9558305e4ffa795f5d40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:45:36.399398    5409 start.go:360] acquireMachinesLock for custom-flannel-364000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:45:36.399430    5409 start.go:364] duration metric: took 24.333µs to acquireMachinesLock for "custom-flannel-364000"
	I0805 16:45:36.399440    5409 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-364000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:45:36.399463    5409 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:45:36.407994    5409 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 16:45:36.423204    5409 start.go:159] libmachine.API.Create for "custom-flannel-364000" (driver="qemu2")
	I0805 16:45:36.423226    5409 client.go:168] LocalClient.Create starting
	I0805 16:45:36.423310    5409 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:45:36.423344    5409 main.go:141] libmachine: Decoding PEM data...
	I0805 16:45:36.423352    5409 main.go:141] libmachine: Parsing certificate...
	I0805 16:45:36.423387    5409 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:45:36.423410    5409 main.go:141] libmachine: Decoding PEM data...
	I0805 16:45:36.423417    5409 main.go:141] libmachine: Parsing certificate...
	I0805 16:45:36.423850    5409 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:45:36.579388    5409 main.go:141] libmachine: Creating SSH key...
	I0805 16:45:36.630608    5409 main.go:141] libmachine: Creating Disk image...
	I0805 16:45:36.630615    5409 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:45:36.630819    5409 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/custom-flannel-364000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/custom-flannel-364000/disk.qcow2
	I0805 16:45:36.640108    5409 main.go:141] libmachine: STDOUT: 
	I0805 16:45:36.640124    5409 main.go:141] libmachine: STDERR: 
	I0805 16:45:36.640188    5409 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/custom-flannel-364000/disk.qcow2 +20000M
	I0805 16:45:36.648137    5409 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:45:36.648149    5409 main.go:141] libmachine: STDERR: 
	I0805 16:45:36.648163    5409 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/custom-flannel-364000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/custom-flannel-364000/disk.qcow2
	I0805 16:45:36.648166    5409 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:45:36.648177    5409 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:45:36.648204    5409 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/custom-flannel-364000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/custom-flannel-364000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/custom-flannel-364000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:fe:cf:3b:02:01 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/custom-flannel-364000/disk.qcow2
	I0805 16:45:36.649796    5409 main.go:141] libmachine: STDOUT: 
	I0805 16:45:36.649813    5409 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:45:36.649831    5409 client.go:171] duration metric: took 226.603542ms to LocalClient.Create
	I0805 16:45:38.651910    5409 start.go:128] duration metric: took 2.252477917s to createHost
	I0805 16:45:38.651960    5409 start.go:83] releasing machines lock for "custom-flannel-364000", held for 2.252569208s
	W0805 16:45:38.651995    5409 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:45:38.662272    5409 out.go:177] * Deleting "custom-flannel-364000" in qemu2 ...
	W0805 16:45:38.687880    5409 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:45:38.687896    5409 start.go:729] Will try again in 5 seconds ...
	I0805 16:45:43.689940    5409 start.go:360] acquireMachinesLock for custom-flannel-364000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:45:43.690122    5409 start.go:364] duration metric: took 141.5µs to acquireMachinesLock for "custom-flannel-364000"
	I0805 16:45:43.690163    5409 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-364000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:45:43.690242    5409 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:45:43.699559    5409 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 16:45:43.717980    5409 start.go:159] libmachine.API.Create for "custom-flannel-364000" (driver="qemu2")
	I0805 16:45:43.718006    5409 client.go:168] LocalClient.Create starting
	I0805 16:45:43.718064    5409 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:45:43.718103    5409 main.go:141] libmachine: Decoding PEM data...
	I0805 16:45:43.718113    5409 main.go:141] libmachine: Parsing certificate...
	I0805 16:45:43.718163    5409 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:45:43.718186    5409 main.go:141] libmachine: Decoding PEM data...
	I0805 16:45:43.718191    5409 main.go:141] libmachine: Parsing certificate...
	I0805 16:45:43.718481    5409 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:45:43.874669    5409 main.go:141] libmachine: Creating SSH key...
	I0805 16:45:43.956988    5409 main.go:141] libmachine: Creating Disk image...
	I0805 16:45:43.957001    5409 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:45:43.957219    5409 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/custom-flannel-364000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/custom-flannel-364000/disk.qcow2
	I0805 16:45:43.967541    5409 main.go:141] libmachine: STDOUT: 
	I0805 16:45:43.967573    5409 main.go:141] libmachine: STDERR: 
	I0805 16:45:43.967642    5409 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/custom-flannel-364000/disk.qcow2 +20000M
	I0805 16:45:43.977213    5409 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:45:43.977239    5409 main.go:141] libmachine: STDERR: 
	I0805 16:45:43.977263    5409 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/custom-flannel-364000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/custom-flannel-364000/disk.qcow2
	I0805 16:45:43.977270    5409 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:45:43.977284    5409 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:45:43.977313    5409 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/custom-flannel-364000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/custom-flannel-364000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/custom-flannel-364000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:8a:4d:5a:6f:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/custom-flannel-364000/disk.qcow2
	I0805 16:45:43.979331    5409 main.go:141] libmachine: STDOUT: 
	I0805 16:45:43.979349    5409 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:45:43.979361    5409 client.go:171] duration metric: took 261.356625ms to LocalClient.Create
	I0805 16:45:45.981505    5409 start.go:128] duration metric: took 2.291280375s to createHost
	I0805 16:45:45.981602    5409 start.go:83] releasing machines lock for "custom-flannel-364000", held for 2.291517167s
	W0805 16:45:45.981917    5409 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-364000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-364000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:45:45.990565    5409 out.go:177] 
	W0805 16:45:45.995536    5409 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:45:45.995552    5409 out.go:239] * 
	* 
	W0805 16:45:45.997558    5409 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:45:46.009501    5409 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.73s)
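
The transcripts above also show minikube's fixed recovery path: create the host, fail, delete the profile, wait five seconds, retry once, and finally exit with status 80 (GUEST_PROVISION). A condensed sketch of that control flow (illustrative only, not minikube's actual implementation):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the real libmachine host-creation step; in
	// this sketch it always fails the way the transcripts above do.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			}
		}
	}

Because the retry reuses the same unreachable socket, the second attempt is guaranteed to fail identically, which is why each test in this group takes roughly ten seconds: two create attempts of about 2.3s each plus the 5s back-off.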

TestNetworkPlugins/group/false/Start (9.86s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-364000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
E0805 16:45:49.657230    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-364000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.855670625s)

-- stdout --
	* [false-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-364000" primary control-plane node in "false-364000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-364000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:45:48.394034    5530 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:45:48.394145    5530 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:45:48.394148    5530 out.go:304] Setting ErrFile to fd 2...
	I0805 16:45:48.394151    5530 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:45:48.394263    5530 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:45:48.395209    5530 out.go:298] Setting JSON to false
	I0805 16:45:48.411076    5530 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4519,"bootTime":1722897029,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:45:48.411148    5530 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:45:48.417950    5530 out.go:177] * [false-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:45:48.424778    5530 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:45:48.424825    5530 notify.go:220] Checking for updates...
	I0805 16:45:48.433681    5530 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:45:48.441703    5530 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:45:48.449695    5530 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:45:48.453603    5530 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:45:48.456711    5530 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:45:48.460029    5530 config.go:182] Loaded profile config "multinode-860000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:45:48.460103    5530 config.go:182] Loaded profile config "stopped-upgrade-596000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 16:45:48.460154    5530 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:45:48.463562    5530 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 16:45:48.470766    5530 start.go:297] selected driver: qemu2
	I0805 16:45:48.470773    5530 start.go:901] validating driver "qemu2" against <nil>
	I0805 16:45:48.470779    5530 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:45:48.473129    5530 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:45:48.476720    5530 out.go:177] * Automatically selected the socket_vmnet network
	I0805 16:45:48.479918    5530 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:45:48.479940    5530 cni.go:84] Creating CNI manager for "false"
	I0805 16:45:48.479967    5530 start.go:340] cluster config:
	{Name:false-364000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:45:48.483615    5530 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:45:48.492212    5530 out.go:177] * Starting "false-364000" primary control-plane node in "false-364000" cluster
	I0805 16:45:48.496777    5530 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:45:48.496793    5530 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 16:45:48.496805    5530 cache.go:56] Caching tarball of preloaded images
	I0805 16:45:48.496877    5530 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:45:48.496883    5530 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:45:48.496955    5530 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/false-364000/config.json ...
	I0805 16:45:48.496970    5530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/false-364000/config.json: {Name:mk8a3ae099ea59af20ce973527f80e8eae1ad17f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:45:48.497181    5530 start.go:360] acquireMachinesLock for false-364000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:45:48.497213    5530 start.go:364] duration metric: took 27µs to acquireMachinesLock for "false-364000"
	I0805 16:45:48.497224    5530 start.go:93] Provisioning new machine with config: &{Name:false-364000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:45:48.497252    5530 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:45:48.505760    5530 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 16:45:48.523527    5530 start.go:159] libmachine.API.Create for "false-364000" (driver="qemu2")
	I0805 16:45:48.523555    5530 client.go:168] LocalClient.Create starting
	I0805 16:45:48.523617    5530 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:45:48.523647    5530 main.go:141] libmachine: Decoding PEM data...
	I0805 16:45:48.523655    5530 main.go:141] libmachine: Parsing certificate...
	I0805 16:45:48.523706    5530 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:45:48.523728    5530 main.go:141] libmachine: Decoding PEM data...
	I0805 16:45:48.523736    5530 main.go:141] libmachine: Parsing certificate...
	I0805 16:45:48.524170    5530 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:45:48.679589    5530 main.go:141] libmachine: Creating SSH key...
	I0805 16:45:48.776646    5530 main.go:141] libmachine: Creating Disk image...
	I0805 16:45:48.776655    5530 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:45:48.776872    5530 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/false-364000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/false-364000/disk.qcow2
	I0805 16:45:48.786343    5530 main.go:141] libmachine: STDOUT: 
	I0805 16:45:48.786367    5530 main.go:141] libmachine: STDERR: 
	I0805 16:45:48.786418    5530 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/false-364000/disk.qcow2 +20000M
	I0805 16:45:48.794480    5530 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:45:48.794494    5530 main.go:141] libmachine: STDERR: 
	I0805 16:45:48.794507    5530 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/false-364000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/false-364000/disk.qcow2
	I0805 16:45:48.794512    5530 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:45:48.794525    5530 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:45:48.794548    5530 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/false-364000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/false-364000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/false-364000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:2b:47:c7:3f:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/false-364000/disk.qcow2
	I0805 16:45:48.796141    5530 main.go:141] libmachine: STDOUT: 
	I0805 16:45:48.796155    5530 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:45:48.796174    5530 client.go:171] duration metric: took 272.619583ms to LocalClient.Create
	I0805 16:45:50.798434    5530 start.go:128] duration metric: took 2.30118925s to createHost
	I0805 16:45:50.798575    5530 start.go:83] releasing machines lock for "false-364000", held for 2.301400375s
	W0805 16:45:50.798618    5530 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:45:50.808367    5530 out.go:177] * Deleting "false-364000" in qemu2 ...
	W0805 16:45:50.837382    5530 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:45:50.837406    5530 start.go:729] Will try again in 5 seconds ...
	I0805 16:45:55.839458    5530 start.go:360] acquireMachinesLock for false-364000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:45:55.839679    5530 start.go:364] duration metric: took 185.333µs to acquireMachinesLock for "false-364000"
	I0805 16:45:55.839744    5530 start.go:93] Provisioning new machine with config: &{Name:false-364000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:45:55.839817    5530 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:45:55.848078    5530 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 16:45:55.863965    5530 start.go:159] libmachine.API.Create for "false-364000" (driver="qemu2")
	I0805 16:45:55.864003    5530 client.go:168] LocalClient.Create starting
	I0805 16:45:55.864071    5530 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:45:55.864108    5530 main.go:141] libmachine: Decoding PEM data...
	I0805 16:45:55.864118    5530 main.go:141] libmachine: Parsing certificate...
	I0805 16:45:55.864154    5530 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:45:55.864177    5530 main.go:141] libmachine: Decoding PEM data...
	I0805 16:45:55.864184    5530 main.go:141] libmachine: Parsing certificate...
	I0805 16:45:55.864440    5530 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:45:56.018115    5530 main.go:141] libmachine: Creating SSH key...
	I0805 16:45:56.162345    5530 main.go:141] libmachine: Creating Disk image...
	I0805 16:45:56.162357    5530 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:45:56.162572    5530 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/false-364000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/false-364000/disk.qcow2
	I0805 16:45:56.172254    5530 main.go:141] libmachine: STDOUT: 
	I0805 16:45:56.172278    5530 main.go:141] libmachine: STDERR: 
	I0805 16:45:56.172338    5530 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/false-364000/disk.qcow2 +20000M
	I0805 16:45:56.180371    5530 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:45:56.180388    5530 main.go:141] libmachine: STDERR: 
	I0805 16:45:56.180404    5530 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/false-364000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/false-364000/disk.qcow2
	I0805 16:45:56.180409    5530 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:45:56.180420    5530 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:45:56.180443    5530 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/false-364000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/false-364000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/false-364000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:a8:5e:67:b4:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/false-364000/disk.qcow2
	I0805 16:45:56.182045    5530 main.go:141] libmachine: STDOUT: 
	I0805 16:45:56.182063    5530 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:45:56.182076    5530 client.go:171] duration metric: took 318.075375ms to LocalClient.Create
	I0805 16:45:58.184218    5530 start.go:128] duration metric: took 2.344419s to createHost
	I0805 16:45:58.184280    5530 start.go:83] releasing machines lock for "false-364000", held for 2.344636583s
	W0805 16:45:58.184761    5530 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-364000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-364000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:45:58.194308    5530 out.go:177] 
	W0805 16:45:58.199328    5530 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:45:58.199357    5530 out.go:239] * 
	* 
	W0805 16:45:58.201338    5530 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:45:58.208309    5530 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.86s)
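Every start in this group dies the same way: socket_vmnet_client exits with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', meaning no socket_vmnet daemon was listening on the CI host, so QEMU never received a network file descriptor. A minimal check on the host, assuming the Homebrew layout implied by the SocketVMnetClientPath and SocketVMnetPath values in the cluster config above (the daemon binary path and the gateway address are assumptions, not taken from this log):

	# confirm the daemon's unix socket exists before re-running the suite
	ls -l /var/run/socket_vmnet
	# if it is absent, launch the daemon manually (requires root)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet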

TestNetworkPlugins/group/enable-default-cni/Start (9.81s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-364000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
E0805 16:46:06.584305    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-364000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.806078s)

-- stdout --
	* [enable-default-cni-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-364000" primary control-plane node in "enable-default-cni-364000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-364000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:46:00.424728    5650 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:46:00.424866    5650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:46:00.424869    5650 out.go:304] Setting ErrFile to fd 2...
	I0805 16:46:00.424871    5650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:46:00.425012    5650 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:46:00.426114    5650 out.go:298] Setting JSON to false
	I0805 16:46:00.442011    5650 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4531,"bootTime":1722897029,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:46:00.442093    5650 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:46:00.449539    5650 out.go:177] * [enable-default-cni-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:46:00.456463    5650 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:46:00.456514    5650 notify.go:220] Checking for updates...
	I0805 16:46:00.463407    5650 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:46:00.466443    5650 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:46:00.469491    5650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:46:00.472410    5650 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:46:00.475418    5650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:46:00.478757    5650 config.go:182] Loaded profile config "multinode-860000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:46:00.478828    5650 config.go:182] Loaded profile config "stopped-upgrade-596000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 16:46:00.478886    5650 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:46:00.483465    5650 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 16:46:00.490454    5650 start.go:297] selected driver: qemu2
	I0805 16:46:00.490460    5650 start.go:901] validating driver "qemu2" against <nil>
	I0805 16:46:00.490466    5650 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:46:00.492800    5650 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:46:00.496393    5650 out.go:177] * Automatically selected the socket_vmnet network
	E0805 16:46:00.499527    5650 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0805 16:46:00.499540    5650 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:46:00.499559    5650 cni.go:84] Creating CNI manager for "bridge"
	I0805 16:46:00.499563    5650 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 16:46:00.499595    5650 start.go:340] cluster config:
	{Name:enable-default-cni-364000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:46:00.503301    5650 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:46:00.508430    5650 out.go:177] * Starting "enable-default-cni-364000" primary control-plane node in "enable-default-cni-364000" cluster
	I0805 16:46:00.512421    5650 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:46:00.512436    5650 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 16:46:00.512443    5650 cache.go:56] Caching tarball of preloaded images
	I0805 16:46:00.512498    5650 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:46:00.512503    5650 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:46:00.512556    5650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/enable-default-cni-364000/config.json ...
	I0805 16:46:00.512567    5650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/enable-default-cni-364000/config.json: {Name:mkffa339d4c3018864c5a5d52336857a5fb7495b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:46:00.512948    5650 start.go:360] acquireMachinesLock for enable-default-cni-364000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:46:00.512982    5650 start.go:364] duration metric: took 26.542µs to acquireMachinesLock for "enable-default-cni-364000"
	I0805 16:46:00.512993    5650 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-364000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:46:00.513042    5650 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:46:00.521456    5650 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 16:46:00.537499    5650 start.go:159] libmachine.API.Create for "enable-default-cni-364000" (driver="qemu2")
	I0805 16:46:00.537531    5650 client.go:168] LocalClient.Create starting
	I0805 16:46:00.537595    5650 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:46:00.537624    5650 main.go:141] libmachine: Decoding PEM data...
	I0805 16:46:00.537634    5650 main.go:141] libmachine: Parsing certificate...
	I0805 16:46:00.537673    5650 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:46:00.537701    5650 main.go:141] libmachine: Decoding PEM data...
	I0805 16:46:00.537708    5650 main.go:141] libmachine: Parsing certificate...
	I0805 16:46:00.538112    5650 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:46:00.690855    5650 main.go:141] libmachine: Creating SSH key...
	I0805 16:46:00.785889    5650 main.go:141] libmachine: Creating Disk image...
	I0805 16:46:00.785900    5650 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:46:00.786107    5650 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/enable-default-cni-364000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/enable-default-cni-364000/disk.qcow2
	I0805 16:46:00.795816    5650 main.go:141] libmachine: STDOUT: 
	I0805 16:46:00.795831    5650 main.go:141] libmachine: STDERR: 
	I0805 16:46:00.795897    5650 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/enable-default-cni-364000/disk.qcow2 +20000M
	I0805 16:46:00.803888    5650 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:46:00.803902    5650 main.go:141] libmachine: STDERR: 
	I0805 16:46:00.803924    5650 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/enable-default-cni-364000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/enable-default-cni-364000/disk.qcow2
	I0805 16:46:00.803928    5650 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:46:00.803942    5650 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:46:00.803968    5650 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/enable-default-cni-364000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/enable-default-cni-364000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/enable-default-cni-364000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:c1:b2:83:ed:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/enable-default-cni-364000/disk.qcow2
	I0805 16:46:00.805569    5650 main.go:141] libmachine: STDOUT: 
	I0805 16:46:00.805581    5650 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:46:00.805598    5650 client.go:171] duration metric: took 268.066291ms to LocalClient.Create
	I0805 16:46:02.807747    5650 start.go:128] duration metric: took 2.294719958s to createHost
	I0805 16:46:02.807817    5650 start.go:83] releasing machines lock for "enable-default-cni-364000", held for 2.2948725s
	W0805 16:46:02.807932    5650 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:46:02.813843    5650 out.go:177] * Deleting "enable-default-cni-364000" in qemu2 ...
	W0805 16:46:02.838766    5650 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:46:02.838789    5650 start.go:729] Will try again in 5 seconds ...
	I0805 16:46:07.840784    5650 start.go:360] acquireMachinesLock for enable-default-cni-364000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:46:07.840955    5650 start.go:364] duration metric: took 151.542µs to acquireMachinesLock for "enable-default-cni-364000"
	I0805 16:46:07.840982    5650 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-364000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:46:07.841069    5650 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:46:07.849382    5650 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 16:46:07.868992    5650 start.go:159] libmachine.API.Create for "enable-default-cni-364000" (driver="qemu2")
	I0805 16:46:07.869021    5650 client.go:168] LocalClient.Create starting
	I0805 16:46:07.869087    5650 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:46:07.869135    5650 main.go:141] libmachine: Decoding PEM data...
	I0805 16:46:07.869144    5650 main.go:141] libmachine: Parsing certificate...
	I0805 16:46:07.869193    5650 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:46:07.869219    5650 main.go:141] libmachine: Decoding PEM data...
	I0805 16:46:07.869225    5650 main.go:141] libmachine: Parsing certificate...
	I0805 16:46:07.869672    5650 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:46:08.028350    5650 main.go:141] libmachine: Creating SSH key...
	I0805 16:46:08.142727    5650 main.go:141] libmachine: Creating Disk image...
	I0805 16:46:08.142733    5650 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:46:08.142938    5650 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/enable-default-cni-364000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/enable-default-cni-364000/disk.qcow2
	I0805 16:46:08.152286    5650 main.go:141] libmachine: STDOUT: 
	I0805 16:46:08.152303    5650 main.go:141] libmachine: STDERR: 
	I0805 16:46:08.152361    5650 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/enable-default-cni-364000/disk.qcow2 +20000M
	I0805 16:46:08.160507    5650 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:46:08.160524    5650 main.go:141] libmachine: STDERR: 
	I0805 16:46:08.160535    5650 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/enable-default-cni-364000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/enable-default-cni-364000/disk.qcow2
	I0805 16:46:08.160540    5650 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:46:08.160551    5650 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:46:08.160580    5650 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/enable-default-cni-364000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/enable-default-cni-364000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/enable-default-cni-364000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:2c:ec:79:de:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/enable-default-cni-364000/disk.qcow2
	I0805 16:46:08.162223    5650 main.go:141] libmachine: STDOUT: 
	I0805 16:46:08.162238    5650 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:46:08.162250    5650 client.go:171] duration metric: took 293.231334ms to LocalClient.Create
	I0805 16:46:10.164419    5650 start.go:128] duration metric: took 2.323364083s to createHost
	I0805 16:46:10.164493    5650 start.go:83] releasing machines lock for "enable-default-cni-364000", held for 2.323573917s
	W0805 16:46:10.164963    5650 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-364000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-364000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:46:10.172686    5650 out.go:177] 
	W0805 16:46:10.178744    5650 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:46:10.178787    5650 out.go:239] * 
	* 
	W0805 16:46:10.181924    5650 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:46:10.189663    5650 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.81s)
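Note the warning logged at 16:46:00.499527 above: --enable-default-cni is deprecated and is rewritten to --cni=bridge before the cluster config is built, so this run exercises the same code path as an explicit bridge CNI start. The equivalent invocation (a sketch; flags copied from the Run line above):

	out/minikube-darwin-arm64 start -p enable-default-cni-364000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2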

TestNetworkPlugins/group/flannel/Start (9.86s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-364000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-364000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.858399625s)

-- stdout --
	* [flannel-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-364000" primary control-plane node in "flannel-364000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-364000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:46:12.394215    5767 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:46:12.394356    5767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:46:12.394359    5767 out.go:304] Setting ErrFile to fd 2...
	I0805 16:46:12.394370    5767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:46:12.394500    5767 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:46:12.395567    5767 out.go:298] Setting JSON to false
	I0805 16:46:12.411589    5767 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4543,"bootTime":1722897029,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:46:12.411662    5767 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:46:12.418213    5767 out.go:177] * [flannel-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:46:12.425193    5767 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:46:12.425259    5767 notify.go:220] Checking for updates...
	I0805 16:46:12.432074    5767 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:46:12.435190    5767 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:46:12.438179    5767 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:46:12.441157    5767 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:46:12.444165    5767 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:46:12.447505    5767 config.go:182] Loaded profile config "multinode-860000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:46:12.447569    5767 config.go:182] Loaded profile config "stopped-upgrade-596000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 16:46:12.447620    5767 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:46:12.452167    5767 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 16:46:12.459174    5767 start.go:297] selected driver: qemu2
	I0805 16:46:12.459179    5767 start.go:901] validating driver "qemu2" against <nil>
	I0805 16:46:12.459184    5767 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:46:12.461302    5767 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:46:12.464084    5767 out.go:177] * Automatically selected the socket_vmnet network
	I0805 16:46:12.467258    5767 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:46:12.467309    5767 cni.go:84] Creating CNI manager for "flannel"
	I0805 16:46:12.467313    5767 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0805 16:46:12.467344    5767 start.go:340] cluster config:
	{Name:flannel-364000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:46:12.470706    5767 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:46:12.478156    5767 out.go:177] * Starting "flannel-364000" primary control-plane node in "flannel-364000" cluster
	I0805 16:46:12.482170    5767 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:46:12.482190    5767 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 16:46:12.482201    5767 cache.go:56] Caching tarball of preloaded images
	I0805 16:46:12.482255    5767 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:46:12.482260    5767 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:46:12.482331    5767 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/flannel-364000/config.json ...
	I0805 16:46:12.482342    5767 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/flannel-364000/config.json: {Name:mk2d97e978bdf3d565e93e399908943c05b3f477 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:46:12.482545    5767 start.go:360] acquireMachinesLock for flannel-364000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:46:12.482574    5767 start.go:364] duration metric: took 23.916µs to acquireMachinesLock for "flannel-364000"
	I0805 16:46:12.482584    5767 start.go:93] Provisioning new machine with config: &{Name:flannel-364000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:flannel-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:46:12.482614    5767 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:46:12.491151    5767 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 16:46:12.507158    5767 start.go:159] libmachine.API.Create for "flannel-364000" (driver="qemu2")
	I0805 16:46:12.507187    5767 client.go:168] LocalClient.Create starting
	I0805 16:46:12.507247    5767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:46:12.507280    5767 main.go:141] libmachine: Decoding PEM data...
	I0805 16:46:12.507290    5767 main.go:141] libmachine: Parsing certificate...
	I0805 16:46:12.507328    5767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:46:12.507351    5767 main.go:141] libmachine: Decoding PEM data...
	I0805 16:46:12.507359    5767 main.go:141] libmachine: Parsing certificate...
	I0805 16:46:12.507769    5767 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:46:12.667113    5767 main.go:141] libmachine: Creating SSH key...
	I0805 16:46:12.745547    5767 main.go:141] libmachine: Creating Disk image...
	I0805 16:46:12.745553    5767 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:46:12.745744    5767 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/flannel-364000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/flannel-364000/disk.qcow2
	I0805 16:46:12.755061    5767 main.go:141] libmachine: STDOUT: 
	I0805 16:46:12.755077    5767 main.go:141] libmachine: STDERR: 
	I0805 16:46:12.755122    5767 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/flannel-364000/disk.qcow2 +20000M
	I0805 16:46:12.763160    5767 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:46:12.763176    5767 main.go:141] libmachine: STDERR: 
	I0805 16:46:12.763196    5767 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/flannel-364000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/flannel-364000/disk.qcow2
	I0805 16:46:12.763200    5767 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:46:12.763214    5767 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:46:12.763246    5767 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/flannel-364000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/flannel-364000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/flannel-364000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:6a:50:9a:2b:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/flannel-364000/disk.qcow2
	I0805 16:46:12.764869    5767 main.go:141] libmachine: STDOUT: 
	I0805 16:46:12.764887    5767 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:46:12.764913    5767 client.go:171] duration metric: took 257.722ms to LocalClient.Create
	I0805 16:46:14.767180    5767 start.go:128] duration metric: took 2.2845845s to createHost
	I0805 16:46:14.767274    5767 start.go:83] releasing machines lock for "flannel-364000", held for 2.284738958s
	W0805 16:46:14.767313    5767 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:46:14.779840    5767 out.go:177] * Deleting "flannel-364000" in qemu2 ...
	W0805 16:46:14.809799    5767 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:46:14.809832    5767 start.go:729] Will try again in 5 seconds ...
	I0805 16:46:19.811971    5767 start.go:360] acquireMachinesLock for flannel-364000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:46:19.812547    5767 start.go:364] duration metric: took 480.375µs to acquireMachinesLock for "flannel-364000"
	I0805 16:46:19.812639    5767 start.go:93] Provisioning new machine with config: &{Name:flannel-364000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:flannel-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:46:19.812930    5767 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:46:19.821187    5767 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 16:46:19.870253    5767 start.go:159] libmachine.API.Create for "flannel-364000" (driver="qemu2")
	I0805 16:46:19.870306    5767 client.go:168] LocalClient.Create starting
	I0805 16:46:19.870470    5767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:46:19.870561    5767 main.go:141] libmachine: Decoding PEM data...
	I0805 16:46:19.870577    5767 main.go:141] libmachine: Parsing certificate...
	I0805 16:46:19.870662    5767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:46:19.870707    5767 main.go:141] libmachine: Decoding PEM data...
	I0805 16:46:19.870720    5767 main.go:141] libmachine: Parsing certificate...
	I0805 16:46:19.871282    5767 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:46:20.035057    5767 main.go:141] libmachine: Creating SSH key...
	I0805 16:46:20.159732    5767 main.go:141] libmachine: Creating Disk image...
	I0805 16:46:20.159739    5767 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:46:20.159947    5767 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/flannel-364000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/flannel-364000/disk.qcow2
	I0805 16:46:20.169495    5767 main.go:141] libmachine: STDOUT: 
	I0805 16:46:20.169589    5767 main.go:141] libmachine: STDERR: 
	I0805 16:46:20.169636    5767 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/flannel-364000/disk.qcow2 +20000M
	I0805 16:46:20.177747    5767 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:46:20.177814    5767 main.go:141] libmachine: STDERR: 
	I0805 16:46:20.177824    5767 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/flannel-364000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/flannel-364000/disk.qcow2
	I0805 16:46:20.177829    5767 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:46:20.177841    5767 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:46:20.177870    5767 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/flannel-364000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/flannel-364000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/flannel-364000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:b1:e6:34:1e:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/flannel-364000/disk.qcow2
	I0805 16:46:20.179546    5767 main.go:141] libmachine: STDOUT: 
	I0805 16:46:20.179602    5767 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:46:20.179614    5767 client.go:171] duration metric: took 309.308333ms to LocalClient.Create
	I0805 16:46:22.181833    5767 start.go:128] duration metric: took 2.368835958s to createHost
	I0805 16:46:22.181917    5767 start.go:83] releasing machines lock for "flannel-364000", held for 2.369390667s
	W0805 16:46:22.182228    5767 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-364000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-364000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:46:22.192768    5767 out.go:177] 
	W0805 16:46:22.199739    5767 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:46:22.199764    5767 out.go:239] * 
	* 
	W0805 16:46:22.202203    5767 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:46:22.210682    5767 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.86s)
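Every failure in this group reduces to the same root cause shown in the stderr above: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand a vmnet file descriptor to qemu-system-aarch64. A minimal standalone probe for that precondition (an illustrative Go helper, not part of the test suite) looks like:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Socket path taken from the failing logs above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// The state this CI host is in: "Connection refused",
			// because no socket_vmnet daemon is serving the socket.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On a healthy host this prints the success line; on this agent it would exit 1 with the same "Connection refused" the driver reports.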

TestNetworkPlugins/group/bridge/Start (9.9s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-364000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-364000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.893610625s)

-- stdout --
	* [bridge-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-364000" primary control-plane node in "bridge-364000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-364000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:46:24.632367    5888 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:46:24.632521    5888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:46:24.632524    5888 out.go:304] Setting ErrFile to fd 2...
	I0805 16:46:24.632526    5888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:46:24.632660    5888 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:46:24.633755    5888 out.go:298] Setting JSON to false
	I0805 16:46:24.650287    5888 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4555,"bootTime":1722897029,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:46:24.650353    5888 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:46:24.656557    5888 out.go:177] * [bridge-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:46:24.663502    5888 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:46:24.663546    5888 notify.go:220] Checking for updates...
	I0805 16:46:24.671452    5888 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:46:24.674518    5888 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:46:24.677513    5888 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:46:24.680498    5888 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:46:24.683470    5888 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:46:24.686821    5888 config.go:182] Loaded profile config "multinode-860000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:46:24.686894    5888 config.go:182] Loaded profile config "stopped-upgrade-596000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 16:46:24.686942    5888 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:46:24.690459    5888 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 16:46:24.697507    5888 start.go:297] selected driver: qemu2
	I0805 16:46:24.697515    5888 start.go:901] validating driver "qemu2" against <nil>
	I0805 16:46:24.697522    5888 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:46:24.699963    5888 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:46:24.703417    5888 out.go:177] * Automatically selected the socket_vmnet network
	I0805 16:46:24.706530    5888 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:46:24.706552    5888 cni.go:84] Creating CNI manager for "bridge"
	I0805 16:46:24.706556    5888 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 16:46:24.706602    5888 start.go:340] cluster config:
	{Name:bridge-364000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:46:24.710521    5888 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:46:24.717469    5888 out.go:177] * Starting "bridge-364000" primary control-plane node in "bridge-364000" cluster
	I0805 16:46:24.721451    5888 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:46:24.721465    5888 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 16:46:24.721474    5888 cache.go:56] Caching tarball of preloaded images
	I0805 16:46:24.721540    5888 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:46:24.721546    5888 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:46:24.721598    5888 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/bridge-364000/config.json ...
	I0805 16:46:24.721610    5888 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/bridge-364000/config.json: {Name:mk1bc1960d940ec5fad3bb5373680709b369bac6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:46:24.721903    5888 start.go:360] acquireMachinesLock for bridge-364000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:46:24.721939    5888 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "bridge-364000"
	I0805 16:46:24.721949    5888 start.go:93] Provisioning new machine with config: &{Name:bridge-364000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:bridge-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:46:24.721979    5888 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:46:24.730437    5888 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 16:46:24.746522    5888 start.go:159] libmachine.API.Create for "bridge-364000" (driver="qemu2")
	I0805 16:46:24.746553    5888 client.go:168] LocalClient.Create starting
	I0805 16:46:24.746633    5888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:46:24.746671    5888 main.go:141] libmachine: Decoding PEM data...
	I0805 16:46:24.746679    5888 main.go:141] libmachine: Parsing certificate...
	I0805 16:46:24.746719    5888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:46:24.746749    5888 main.go:141] libmachine: Decoding PEM data...
	I0805 16:46:24.746757    5888 main.go:141] libmachine: Parsing certificate...
	I0805 16:46:24.747174    5888 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:46:24.906712    5888 main.go:141] libmachine: Creating SSH key...
	I0805 16:46:25.068890    5888 main.go:141] libmachine: Creating Disk image...
	I0805 16:46:25.068901    5888 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:46:25.069119    5888 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/bridge-364000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/bridge-364000/disk.qcow2
	I0805 16:46:25.078510    5888 main.go:141] libmachine: STDOUT: 
	I0805 16:46:25.078527    5888 main.go:141] libmachine: STDERR: 
	I0805 16:46:25.078573    5888 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/bridge-364000/disk.qcow2 +20000M
	I0805 16:46:25.086642    5888 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:46:25.086657    5888 main.go:141] libmachine: STDERR: 
	I0805 16:46:25.086672    5888 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/bridge-364000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/bridge-364000/disk.qcow2
	I0805 16:46:25.086675    5888 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:46:25.086686    5888 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:46:25.086713    5888 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/bridge-364000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/bridge-364000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/bridge-364000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:cc:0d:e8:6c:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/bridge-364000/disk.qcow2
	I0805 16:46:25.088485    5888 main.go:141] libmachine: STDOUT: 
	I0805 16:46:25.088496    5888 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:46:25.088515    5888 client.go:171] duration metric: took 341.963375ms to LocalClient.Create
	I0805 16:46:27.090759    5888 start.go:128] duration metric: took 2.368794291s to createHost
	I0805 16:46:27.090850    5888 start.go:83] releasing machines lock for "bridge-364000", held for 2.368949042s
	W0805 16:46:27.090911    5888 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:46:27.102852    5888 out.go:177] * Deleting "bridge-364000" in qemu2 ...
	W0805 16:46:27.132629    5888 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:46:27.132724    5888 start.go:729] Will try again in 5 seconds ...
	I0805 16:46:32.134833    5888 start.go:360] acquireMachinesLock for bridge-364000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:46:32.135511    5888 start.go:364] duration metric: took 558.083µs to acquireMachinesLock for "bridge-364000"
	I0805 16:46:32.135616    5888 start.go:93] Provisioning new machine with config: &{Name:bridge-364000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:bridge-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:46:32.135846    5888 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:46:32.144215    5888 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 16:46:32.196287    5888 start.go:159] libmachine.API.Create for "bridge-364000" (driver="qemu2")
	I0805 16:46:32.196337    5888 client.go:168] LocalClient.Create starting
	I0805 16:46:32.196472    5888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:46:32.196542    5888 main.go:141] libmachine: Decoding PEM data...
	I0805 16:46:32.196557    5888 main.go:141] libmachine: Parsing certificate...
	I0805 16:46:32.196613    5888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:46:32.196656    5888 main.go:141] libmachine: Decoding PEM data...
	I0805 16:46:32.196674    5888 main.go:141] libmachine: Parsing certificate...
	I0805 16:46:32.197239    5888 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:46:32.362324    5888 main.go:141] libmachine: Creating SSH key...
	I0805 16:46:32.434563    5888 main.go:141] libmachine: Creating Disk image...
	I0805 16:46:32.434573    5888 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:46:32.434766    5888 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/bridge-364000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/bridge-364000/disk.qcow2
	I0805 16:46:32.444655    5888 main.go:141] libmachine: STDOUT: 
	I0805 16:46:32.444676    5888 main.go:141] libmachine: STDERR: 
	I0805 16:46:32.444741    5888 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/bridge-364000/disk.qcow2 +20000M
	I0805 16:46:32.452925    5888 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:46:32.452944    5888 main.go:141] libmachine: STDERR: 
	I0805 16:46:32.452956    5888 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/bridge-364000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/bridge-364000/disk.qcow2
	I0805 16:46:32.452963    5888 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:46:32.452970    5888 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:46:32.453005    5888 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/bridge-364000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/bridge-364000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/bridge-364000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:92:c5:bd:3f:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/bridge-364000/disk.qcow2
	I0805 16:46:32.454779    5888 main.go:141] libmachine: STDOUT: 
	I0805 16:46:32.454797    5888 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:46:32.454810    5888 client.go:171] duration metric: took 258.472666ms to LocalClient.Create
	I0805 16:46:34.457171    5888 start.go:128] duration metric: took 2.321308084s to createHost
	I0805 16:46:34.457292    5888 start.go:83] releasing machines lock for "bridge-364000", held for 2.321779583s
	W0805 16:46:34.457671    5888 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-364000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-364000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:46:34.466305    5888 out.go:177] 
	W0805 16:46:34.473362    5888 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:46:34.473407    5888 out.go:239] * 
	* 
	W0805 16:46:34.475731    5888 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:46:34.484325    5888 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.90s)
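The bridge log shows the same two-attempt shape as the flannel one: StartHost fails, the driver deletes the half-created profile, waits a fixed five seconds, and retries once before exiting with GUEST_PROVISION. A simplified sketch of that control flow (a hypothetical stub, not the actual minikube source) is:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the driver's host-creation step; on this agent
	// it always fails the way the logs show.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		err := createHost()
		if err == nil {
			return
		}
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // the logs show a fixed 5s pause between attempts
		if err := createHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}

Because the root cause is a missing daemon rather than a transient race, the retry always fails too, which is why every test in this group lands at roughly the same ~10s duration.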

TestNetworkPlugins/group/kubenet/Start (9.84s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-364000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-364000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.839829459s)

-- stdout --
	* [kubenet-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-364000" primary control-plane node in "kubenet-364000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-364000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:46:36.661707    5997 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:46:36.661829    5997 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:46:36.661832    5997 out.go:304] Setting ErrFile to fd 2...
	I0805 16:46:36.661834    5997 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:46:36.662006    5997 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:46:36.663136    5997 out.go:298] Setting JSON to false
	I0805 16:46:36.679308    5997 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4567,"bootTime":1722897029,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:46:36.679403    5997 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:46:36.685461    5997 out.go:177] * [kubenet-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:46:36.692433    5997 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:46:36.692451    5997 notify.go:220] Checking for updates...
	I0805 16:46:36.699448    5997 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:46:36.702380    5997 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:46:36.705420    5997 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:46:36.708471    5997 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:46:36.711435    5997 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:46:36.714729    5997 config.go:182] Loaded profile config "multinode-860000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:46:36.714792    5997 config.go:182] Loaded profile config "stopped-upgrade-596000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 16:46:36.714843    5997 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:46:36.719399    5997 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 16:46:36.726415    5997 start.go:297] selected driver: qemu2
	I0805 16:46:36.726421    5997 start.go:901] validating driver "qemu2" against <nil>
	I0805 16:46:36.726428    5997 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:46:36.728745    5997 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:46:36.732454    5997 out.go:177] * Automatically selected the socket_vmnet network
	I0805 16:46:36.735442    5997 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:46:36.735459    5997 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0805 16:46:36.735498    5997 start.go:340] cluster config:
	{Name:kubenet-364000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:46:36.739008    5997 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:46:36.746414    5997 out.go:177] * Starting "kubenet-364000" primary control-plane node in "kubenet-364000" cluster
	I0805 16:46:36.750359    5997 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:46:36.750373    5997 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 16:46:36.750378    5997 cache.go:56] Caching tarball of preloaded images
	I0805 16:46:36.750428    5997 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:46:36.750433    5997 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:46:36.750494    5997 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/kubenet-364000/config.json ...
	I0805 16:46:36.750504    5997 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/kubenet-364000/config.json: {Name:mkd9fc440a662171aa16ac567e2addf2961edd01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:46:36.750708    5997 start.go:360] acquireMachinesLock for kubenet-364000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:46:36.750738    5997 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "kubenet-364000"
	I0805 16:46:36.750748    5997 start.go:93] Provisioning new machine with config: &{Name:kubenet-364000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kubenet-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:46:36.750781    5997 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:46:36.759377    5997 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 16:46:36.774480    5997 start.go:159] libmachine.API.Create for "kubenet-364000" (driver="qemu2")
	I0805 16:46:36.774511    5997 client.go:168] LocalClient.Create starting
	I0805 16:46:36.774571    5997 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:46:36.774600    5997 main.go:141] libmachine: Decoding PEM data...
	I0805 16:46:36.774610    5997 main.go:141] libmachine: Parsing certificate...
	I0805 16:46:36.774648    5997 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:46:36.774671    5997 main.go:141] libmachine: Decoding PEM data...
	I0805 16:46:36.774681    5997 main.go:141] libmachine: Parsing certificate...
	I0805 16:46:36.775014    5997 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:46:36.929679    5997 main.go:141] libmachine: Creating SSH key...
	I0805 16:46:37.054393    5997 main.go:141] libmachine: Creating Disk image...
	I0805 16:46:37.054404    5997 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:46:37.054816    5997 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubenet-364000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubenet-364000/disk.qcow2
	I0805 16:46:37.064394    5997 main.go:141] libmachine: STDOUT: 
	I0805 16:46:37.064411    5997 main.go:141] libmachine: STDERR: 
	I0805 16:46:37.064468    5997 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubenet-364000/disk.qcow2 +20000M
	I0805 16:46:37.072697    5997 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:46:37.072712    5997 main.go:141] libmachine: STDERR: 
	I0805 16:46:37.072727    5997 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubenet-364000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubenet-364000/disk.qcow2
	I0805 16:46:37.072732    5997 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:46:37.072745    5997 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:46:37.072771    5997 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubenet-364000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubenet-364000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubenet-364000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:6c:1c:0e:2f:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubenet-364000/disk.qcow2
	I0805 16:46:37.074382    5997 main.go:141] libmachine: STDOUT: 
	I0805 16:46:37.074396    5997 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:46:37.074413    5997 client.go:171] duration metric: took 299.904167ms to LocalClient.Create
	I0805 16:46:39.076496    5997 start.go:128] duration metric: took 2.325746417s to createHost
	I0805 16:46:39.076520    5997 start.go:83] releasing machines lock for "kubenet-364000", held for 2.325824333s
	W0805 16:46:39.076535    5997 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:46:39.084980    5997 out.go:177] * Deleting "kubenet-364000" in qemu2 ...
	W0805 16:46:39.096298    5997 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:46:39.096305    5997 start.go:729] Will try again in 5 seconds ...
	I0805 16:46:44.098343    5997 start.go:360] acquireMachinesLock for kubenet-364000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:46:44.098513    5997 start.go:364] duration metric: took 134.875µs to acquireMachinesLock for "kubenet-364000"
	I0805 16:46:44.098556    5997 start.go:93] Provisioning new machine with config: &{Name:kubenet-364000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kubenet-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:46:44.098647    5997 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:46:44.109917    5997 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 16:46:44.126696    5997 start.go:159] libmachine.API.Create for "kubenet-364000" (driver="qemu2")
	I0805 16:46:44.126723    5997 client.go:168] LocalClient.Create starting
	I0805 16:46:44.126792    5997 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:46:44.126830    5997 main.go:141] libmachine: Decoding PEM data...
	I0805 16:46:44.126839    5997 main.go:141] libmachine: Parsing certificate...
	I0805 16:46:44.126877    5997 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:46:44.126900    5997 main.go:141] libmachine: Decoding PEM data...
	I0805 16:46:44.126905    5997 main.go:141] libmachine: Parsing certificate...
	I0805 16:46:44.127427    5997 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:46:44.279960    5997 main.go:141] libmachine: Creating SSH key...
	I0805 16:46:44.408320    5997 main.go:141] libmachine: Creating Disk image...
	I0805 16:46:44.408328    5997 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:46:44.408552    5997 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubenet-364000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubenet-364000/disk.qcow2
	I0805 16:46:44.418389    5997 main.go:141] libmachine: STDOUT: 
	I0805 16:46:44.418408    5997 main.go:141] libmachine: STDERR: 
	I0805 16:46:44.418454    5997 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubenet-364000/disk.qcow2 +20000M
	I0805 16:46:44.426897    5997 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:46:44.426916    5997 main.go:141] libmachine: STDERR: 
	I0805 16:46:44.426934    5997 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubenet-364000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubenet-364000/disk.qcow2
	I0805 16:46:44.426947    5997 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:46:44.426959    5997 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:46:44.426993    5997 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubenet-364000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubenet-364000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubenet-364000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:a1:b8:39:cc:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/kubenet-364000/disk.qcow2
	I0805 16:46:44.428600    5997 main.go:141] libmachine: STDOUT: 
	I0805 16:46:44.428615    5997 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:46:44.428627    5997 client.go:171] duration metric: took 301.9065ms to LocalClient.Create
	I0805 16:46:46.430802    5997 start.go:128] duration metric: took 2.332172s to createHost
	I0805 16:46:46.430884    5997 start.go:83] releasing machines lock for "kubenet-364000", held for 2.33240675s
	W0805 16:46:46.431336    5997 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-364000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-364000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:46:46.441117    5997 out.go:177] 
	W0805 16:46:46.448261    5997 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:46:46.448319    5997 out.go:239] * 
	* 
	W0805 16:46:46.450931    5997 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:46:46.459179    5997 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.84s)
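Each attempt also logs the exact invocation: qemu-system-aarch64 is not launched directly but passed as the command for /opt/socket_vmnet/bin/socket_vmnet_client, which connects to the vmnet socket and hands the resulting file descriptor to qemu as fd 3 (the -netdev socket,id=net0,fd=3 argument). A trimmed reconstruction of that wrapping (argument list shortened and MAC address illustrative; the real runs above pass the full firmware, ISO, QMP, and pidfile flags) is:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command(
			"/opt/socket_vmnet/bin/socket_vmnet_client", // dials the vmnet socket...
			"/var/run/socket_vmnet",                     // ...at this path, then execs the rest
			"qemu-system-aarch64",
			"-M", "virt,highmem=off",
			"-cpu", "host",
			"-accel", "hvf",
			"-m", "3072", "-smp", "2",
			"-device", "virtio-net-pci,netdev=net0,mac=02:00:00:00:00:01", // illustrative MAC
			"-netdev", "socket,id=net0,fd=3", // fd 3 is inherited from socket_vmnet_client
		)
		// Print rather than run: with no listener on /var/run/socket_vmnet,
		// executing this fails exactly as the logs show ("Connection refused").
		fmt.Println(cmd.String())
	}

This is why the error surfaces before QEMU ever starts: the wrapper is refused at the socket, so no VM, QMP monitor, or pidfile is ever created.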

TestStartStop/group/old-k8s-version/serial/FirstStart (9.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-238000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-238000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.81817575s)

-- stdout --
	* [old-k8s-version-238000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-238000" primary control-plane node in "old-k8s-version-238000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-238000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:46:48.768149    6115 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:46:48.768267    6115 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:46:48.768271    6115 out.go:304] Setting ErrFile to fd 2...
	I0805 16:46:48.768273    6115 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:46:48.768391    6115 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:46:48.769503    6115 out.go:298] Setting JSON to false
	I0805 16:46:48.786102    6115 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4579,"bootTime":1722897029,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:46:48.786173    6115 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:46:48.792330    6115 out.go:177] * [old-k8s-version-238000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:46:48.799226    6115 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:46:48.799272    6115 notify.go:220] Checking for updates...
	I0805 16:46:48.806253    6115 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:46:48.809253    6115 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:46:48.812258    6115 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:46:48.815274    6115 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:46:48.818253    6115 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:46:48.821683    6115 config.go:182] Loaded profile config "multinode-860000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:46:48.821754    6115 config.go:182] Loaded profile config "stopped-upgrade-596000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 16:46:48.821804    6115 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:46:48.826266    6115 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 16:46:48.833252    6115 start.go:297] selected driver: qemu2
	I0805 16:46:48.833259    6115 start.go:901] validating driver "qemu2" against <nil>
	I0805 16:46:48.833265    6115 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:46:48.835433    6115 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:46:48.838212    6115 out.go:177] * Automatically selected the socket_vmnet network
	I0805 16:46:48.841294    6115 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:46:48.841324    6115 cni.go:84] Creating CNI manager for ""
	I0805 16:46:48.841331    6115 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0805 16:46:48.841361    6115 start.go:340] cluster config:
	{Name:old-k8s-version-238000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-238000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:46:48.845108    6115 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:46:48.852270    6115 out.go:177] * Starting "old-k8s-version-238000" primary control-plane node in "old-k8s-version-238000" cluster
	I0805 16:46:48.856223    6115 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 16:46:48.856243    6115 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0805 16:46:48.856254    6115 cache.go:56] Caching tarball of preloaded images
	I0805 16:46:48.856309    6115 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:46:48.856315    6115 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0805 16:46:48.856401    6115 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/old-k8s-version-238000/config.json ...
	I0805 16:46:48.856417    6115 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/old-k8s-version-238000/config.json: {Name:mk5c0bd5c559bb4c8108392c1aa4c8d7b37c85b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:46:48.856804    6115 start.go:360] acquireMachinesLock for old-k8s-version-238000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:46:48.856838    6115 start.go:364] duration metric: took 25.667µs to acquireMachinesLock for "old-k8s-version-238000"
	I0805 16:46:48.856847    6115 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-238000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-238000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:46:48.856894    6115 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:46:48.865185    6115 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:46:48.882349    6115 start.go:159] libmachine.API.Create for "old-k8s-version-238000" (driver="qemu2")
	I0805 16:46:48.882381    6115 client.go:168] LocalClient.Create starting
	I0805 16:46:48.882441    6115 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:46:48.882472    6115 main.go:141] libmachine: Decoding PEM data...
	I0805 16:46:48.882480    6115 main.go:141] libmachine: Parsing certificate...
	I0805 16:46:48.882522    6115 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:46:48.882544    6115 main.go:141] libmachine: Decoding PEM data...
	I0805 16:46:48.882549    6115 main.go:141] libmachine: Parsing certificate...
	I0805 16:46:48.882963    6115 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:46:49.041267    6115 main.go:141] libmachine: Creating SSH key...
	I0805 16:46:49.103930    6115 main.go:141] libmachine: Creating Disk image...
	I0805 16:46:49.103939    6115 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:46:49.104179    6115 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/disk.qcow2
	I0805 16:46:49.114004    6115 main.go:141] libmachine: STDOUT: 
	I0805 16:46:49.114021    6115 main.go:141] libmachine: STDERR: 
	I0805 16:46:49.114080    6115 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/disk.qcow2 +20000M
	I0805 16:46:49.122490    6115 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:46:49.122505    6115 main.go:141] libmachine: STDERR: 
	I0805 16:46:49.122520    6115 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/disk.qcow2
	I0805 16:46:49.122527    6115 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:46:49.122540    6115 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:46:49.122564    6115 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:f7:02:b2:bd:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/disk.qcow2
	I0805 16:46:49.124270    6115 main.go:141] libmachine: STDOUT: 
	I0805 16:46:49.124283    6115 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:46:49.124300    6115 client.go:171] duration metric: took 241.918541ms to LocalClient.Create
	I0805 16:46:51.126627    6115 start.go:128] duration metric: took 2.269723791s to createHost
	I0805 16:46:51.126742    6115 start.go:83] releasing machines lock for "old-k8s-version-238000", held for 2.269939667s
	W0805 16:46:51.126797    6115 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:46:51.140994    6115 out.go:177] * Deleting "old-k8s-version-238000" in qemu2 ...
	W0805 16:46:51.167470    6115 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:46:51.167496    6115 start.go:729] Will try again in 5 seconds ...
	I0805 16:46:56.169638    6115 start.go:360] acquireMachinesLock for old-k8s-version-238000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:46:56.170237    6115 start.go:364] duration metric: took 478.292µs to acquireMachinesLock for "old-k8s-version-238000"
	I0805 16:46:56.170382    6115 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-238000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-238000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:46:56.170696    6115 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:46:56.176379    6115 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:46:56.222470    6115 start.go:159] libmachine.API.Create for "old-k8s-version-238000" (driver="qemu2")
	I0805 16:46:56.222523    6115 client.go:168] LocalClient.Create starting
	I0805 16:46:56.222637    6115 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:46:56.222705    6115 main.go:141] libmachine: Decoding PEM data...
	I0805 16:46:56.222721    6115 main.go:141] libmachine: Parsing certificate...
	I0805 16:46:56.222781    6115 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:46:56.222837    6115 main.go:141] libmachine: Decoding PEM data...
	I0805 16:46:56.222848    6115 main.go:141] libmachine: Parsing certificate...
	I0805 16:46:56.223586    6115 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:46:56.384113    6115 main.go:141] libmachine: Creating SSH key...
	I0805 16:46:56.502720    6115 main.go:141] libmachine: Creating Disk image...
	I0805 16:46:56.502727    6115 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:46:56.502939    6115 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/disk.qcow2
	I0805 16:46:56.512395    6115 main.go:141] libmachine: STDOUT: 
	I0805 16:46:56.512415    6115 main.go:141] libmachine: STDERR: 
	I0805 16:46:56.512483    6115 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/disk.qcow2 +20000M
	I0805 16:46:56.520689    6115 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:46:56.520703    6115 main.go:141] libmachine: STDERR: 
	I0805 16:46:56.520714    6115 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/disk.qcow2
	I0805 16:46:56.520717    6115 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:46:56.520729    6115 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:46:56.520761    6115 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:81:32:c8:07:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/disk.qcow2
	I0805 16:46:56.522417    6115 main.go:141] libmachine: STDOUT: 
	I0805 16:46:56.522433    6115 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:46:56.522446    6115 client.go:171] duration metric: took 299.923792ms to LocalClient.Create
	I0805 16:46:58.524478    6115 start.go:128] duration metric: took 2.353811417s to createHost
	I0805 16:46:58.524509    6115 start.go:83] releasing machines lock for "old-k8s-version-238000", held for 2.354293458s
	W0805 16:46:58.524605    6115 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-238000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-238000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:46:58.532770    6115 out.go:177] 
	W0805 16:46:58.536859    6115 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:46:58.536866    6115 out.go:239] * 
	* 
	W0805 16:46:58.537520    6115 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:46:58.547837    6115 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-238000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-238000 -n old-k8s-version-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-238000 -n old-k8s-version-238000: exit status 7 (36.731125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.86s)
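Every failure in this serial group traces to the single root cause captured in the stderr above: the QEMU helper /opt/socket_vmnet/bin/socket_vmnet_client gets "Connection refused" on "/var/run/socket_vmnet", so no VM is ever created. A minimal triage sketch for the CI host follows; the binary and socket paths are taken from the log, while the gateway address is only an illustrative value from the socket_vmnet README, not something this report confirms:

    # Is the socket_vmnet daemon running, and does its unix socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet

    # If not, start it by hand (vmnet requires root; the gateway IP is an example):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

With the daemon listening again, the same "out/minikube-darwin-arm64 start -p old-k8s-version-238000 ..." invocation would be expected to get past host creation.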

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-238000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-238000 create -f testdata/busybox.yaml: exit status 1 (27.246834ms)

** stderr ** 
	error: context "old-k8s-version-238000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-238000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-238000 -n old-k8s-version-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-238000 -n old-k8s-version-238000: exit status 7 (30.41175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-238000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-238000 -n old-k8s-version-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-238000 -n old-k8s-version-238000: exit status 7 (28.833875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-238000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-238000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-238000 describe deploy/metrics-server -n kube-system: exit status 1 (27.774833ms)

** stderr ** 
	error: context "old-k8s-version-238000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-238000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-238000 -n old-k8s-version-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-238000 -n old-k8s-version-238000: exit status 7 (29.277083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-238000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-238000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.184004416s)

-- stdout --
	* [old-k8s-version-238000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-238000" primary control-plane node in "old-k8s-version-238000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-238000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-238000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:47:02.548064    6181 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:47:02.548197    6181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:02.548200    6181 out.go:304] Setting ErrFile to fd 2...
	I0805 16:47:02.548203    6181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:02.548336    6181 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:47:02.549433    6181 out.go:298] Setting JSON to false
	I0805 16:47:02.566007    6181 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4593,"bootTime":1722897029,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:47:02.566075    6181 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:47:02.571120    6181 out.go:177] * [old-k8s-version-238000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:47:02.578289    6181 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:47:02.578312    6181 notify.go:220] Checking for updates...
	I0805 16:47:02.584203    6181 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:47:02.587239    6181 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:47:02.588594    6181 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:47:02.591188    6181 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:47:02.594243    6181 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:47:02.597582    6181 config.go:182] Loaded profile config "old-k8s-version-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0805 16:47:02.601209    6181 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0805 16:47:02.604213    6181 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:47:02.608253    6181 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 16:47:02.615178    6181 start.go:297] selected driver: qemu2
	I0805 16:47:02.615184    6181 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-238000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-238000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:47:02.615228    6181 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:47:02.617717    6181 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:47:02.617755    6181 cni.go:84] Creating CNI manager for ""
	I0805 16:47:02.617763    6181 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0805 16:47:02.617783    6181 start.go:340] cluster config:
	{Name:old-k8s-version-238000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-238000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:47:02.621362    6181 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:02.628185    6181 out.go:177] * Starting "old-k8s-version-238000" primary control-plane node in "old-k8s-version-238000" cluster
	I0805 16:47:02.632204    6181 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 16:47:02.632216    6181 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0805 16:47:02.632225    6181 cache.go:56] Caching tarball of preloaded images
	I0805 16:47:02.632277    6181 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:47:02.632282    6181 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0805 16:47:02.632327    6181 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/old-k8s-version-238000/config.json ...
	I0805 16:47:02.632808    6181 start.go:360] acquireMachinesLock for old-k8s-version-238000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:47:02.632841    6181 start.go:364] duration metric: took 27.875µs to acquireMachinesLock for "old-k8s-version-238000"
	I0805 16:47:02.632849    6181 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:47:02.632857    6181 fix.go:54] fixHost starting: 
	I0805 16:47:02.632972    6181 fix.go:112] recreateIfNeeded on old-k8s-version-238000: state=Stopped err=<nil>
	W0805 16:47:02.632980    6181 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:47:02.637169    6181 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-238000" ...
	I0805 16:47:02.645154    6181 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:47:02.645186    6181 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:81:32:c8:07:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/disk.qcow2
	I0805 16:47:02.647155    6181 main.go:141] libmachine: STDOUT: 
	I0805 16:47:02.647171    6181 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:47:02.647203    6181 fix.go:56] duration metric: took 14.347125ms for fixHost
	I0805 16:47:02.647208    6181 start.go:83] releasing machines lock for "old-k8s-version-238000", held for 14.362667ms
	W0805 16:47:02.647213    6181 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:47:02.647240    6181 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:47:02.647244    6181 start.go:729] Will try again in 5 seconds ...
	I0805 16:47:07.649377    6181 start.go:360] acquireMachinesLock for old-k8s-version-238000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:47:07.649786    6181 start.go:364] duration metric: took 309.917µs to acquireMachinesLock for "old-k8s-version-238000"
	I0805 16:47:07.649917    6181 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:47:07.649930    6181 fix.go:54] fixHost starting: 
	I0805 16:47:07.650493    6181 fix.go:112] recreateIfNeeded on old-k8s-version-238000: state=Stopped err=<nil>
	W0805 16:47:07.650511    6181 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:47:07.661685    6181 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-238000" ...
	I0805 16:47:07.664641    6181 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:47:07.664778    6181 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:81:32:c8:07:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/old-k8s-version-238000/disk.qcow2
	I0805 16:47:07.671060    6181 main.go:141] libmachine: STDOUT: 
	I0805 16:47:07.671096    6181 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:47:07.671141    6181 fix.go:56] duration metric: took 21.212292ms for fixHost
	I0805 16:47:07.671156    6181 start.go:83] releasing machines lock for "old-k8s-version-238000", held for 21.355042ms
	W0805 16:47:07.671253    6181 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-238000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-238000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:47:07.678649    6181 out.go:177] 
	W0805 16:47:07.682690    6181 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:47:07.682707    6181 out.go:239] * 
	* 
	W0805 16:47:07.684033    6181 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:47:07.694675    6181 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-238000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-238000 -n old-k8s-version-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-238000 -n old-k8s-version-238000: exit status 7 (52.987625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)
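Note that the second start takes the restart path ("Skipping create...Using existing machine configuration", then fixHost in the stderr above) yet dies on the identical connect error, which points at the host-side daemon rather than the profile state. The socket can be probed directly with the netcat shipped with macOS, where -U targets a unix-domain socket; this is a diagnostic sketch, not part of the test run:

    # Attempt a connect-and-close against the helper's unix socket:
    nc -U /var/run/socket_vmnet < /dev/null && echo "socket reachable" || echo "connection refused, matching the log above"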

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-238000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-238000 -n old-k8s-version-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-238000 -n old-k8s-version-238000: exit status 7 (31.164ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
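From here on the remaining tests in the group fail mechanically: because no cluster was ever provisioned, the kubeconfig never gained an "old-k8s-version-238000" entry, so every "kubectl --context old-k8s-version-238000 ..." call exits 1 with "context ... does not exist". A quick way to confirm this against the kubeconfig the run used (path taken from the log above):

    # List the contexts in the run's kubeconfig:
    KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig kubectl config get-contexts
    # old-k8s-version-238000 should be absent from the listing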

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-238000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-238000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-238000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.942708ms)

** stderr ** 
	error: context "old-k8s-version-238000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-238000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-238000 -n old-k8s-version-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-238000 -n old-k8s-version-238000: exit status 7 (29.372125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-238000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-238000 -n old-k8s-version-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-238000 -n old-k8s-version-238000: exit status 7 (27.729542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/old-k8s-version/serial/Pause (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-238000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-238000 --alsologtostderr -v=1: exit status 83 (37.095208ms)

-- stdout --
	* The control-plane node old-k8s-version-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-238000"

-- /stdout --
** stderr ** 
	I0805 16:47:07.939347    6207 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:47:07.940200    6207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:07.940203    6207 out.go:304] Setting ErrFile to fd 2...
	I0805 16:47:07.940205    6207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:07.940338    6207 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:47:07.940540    6207 out.go:298] Setting JSON to false
	I0805 16:47:07.940547    6207 mustload.go:65] Loading cluster: old-k8s-version-238000
	I0805 16:47:07.940732    6207 config.go:182] Loaded profile config "old-k8s-version-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0805 16:47:07.943817    6207 out.go:177] * The control-plane node old-k8s-version-238000 host is not running: state=Stopped
	I0805 16:47:07.944938    6207 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-238000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-238000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-238000 -n old-k8s-version-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-238000 -n old-k8s-version-238000: exit status 7 (29.025459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-238000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-238000 -n old-k8s-version-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-238000 -n old-k8s-version-238000: exit status 7 (28.026375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.09s)

TestStartStop/group/no-preload/serial/FirstStart (9.96s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-265000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-265000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (9.895782375s)

-- stdout --
	* [no-preload-265000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-265000" primary control-plane node in "no-preload-265000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-265000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:47:08.248557    6224 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:47:08.248682    6224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:08.248685    6224 out.go:304] Setting ErrFile to fd 2...
	I0805 16:47:08.248688    6224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:08.248826    6224 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:47:08.249959    6224 out.go:298] Setting JSON to false
	I0805 16:47:08.266589    6224 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4599,"bootTime":1722897029,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:47:08.266661    6224 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:47:08.271204    6224 out.go:177] * [no-preload-265000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:47:08.278165    6224 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:47:08.278275    6224 notify.go:220] Checking for updates...
	I0805 16:47:08.285193    6224 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:47:08.288187    6224 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:47:08.291143    6224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:47:08.294272    6224 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:47:08.297138    6224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:47:08.300457    6224 config.go:182] Loaded profile config "multinode-860000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:47:08.300520    6224 config.go:182] Loaded profile config "stopped-upgrade-596000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 16:47:08.300566    6224 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:47:08.305157    6224 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 16:47:08.312169    6224 start.go:297] selected driver: qemu2
	I0805 16:47:08.312175    6224 start.go:901] validating driver "qemu2" against <nil>
	I0805 16:47:08.312180    6224 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:47:08.314385    6224 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:47:08.317147    6224 out.go:177] * Automatically selected the socket_vmnet network
	I0805 16:47:08.318739    6224 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:47:08.318764    6224 cni.go:84] Creating CNI manager for ""
	I0805 16:47:08.318772    6224 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:47:08.318783    6224 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 16:47:08.318808    6224 start.go:340] cluster config:
	{Name:no-preload-265000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-265000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:47:08.322569    6224 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:08.330155    6224 out.go:177] * Starting "no-preload-265000" primary control-plane node in "no-preload-265000" cluster
	I0805 16:47:08.334148    6224 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 16:47:08.334224    6224 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/no-preload-265000/config.json ...
	I0805 16:47:08.334243    6224 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/no-preload-265000/config.json: {Name:mk5e1bf6b2776d30a61ae914984ffc7b2ef74cf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:47:08.334266    6224 cache.go:107] acquiring lock: {Name:mkdb304c7bbd79570fe8e51264f4688630824a9e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:08.334272    6224 cache.go:107] acquiring lock: {Name:mkc83aeebe6b3487ea0a0222042b3a14ab188be6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:08.334289    6224 cache.go:107] acquiring lock: {Name:mkb3d37007b4d9b676c2b96490b474b2202f547a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:08.334336    6224 cache.go:115] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0805 16:47:08.334342    6224 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 87.792µs
	I0805 16:47:08.334349    6224 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0805 16:47:08.334267    6224 cache.go:107] acquiring lock: {Name:mk73a4cf503166775a3cd6f2f7e1e12a189b6a31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:08.334427    6224 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0805 16:47:08.334431    6224 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 16:47:08.334462    6224 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 16:47:08.334469    6224 cache.go:107] acquiring lock: {Name:mk221df91522bfad87b4af8e6d4238022949f1ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:08.334509    6224 cache.go:107] acquiring lock: {Name:mkf7e8a98e2b8ae0466bc13132e3150172af1e85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:08.334524    6224 cache.go:107] acquiring lock: {Name:mk64803bbe3efb913b30bac942612863fb1b3464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:08.334430    6224 cache.go:107] acquiring lock: {Name:mk9dbc1c8335885d6f613fd756e258ab697de504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:08.334642    6224 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 16:47:08.334644    6224 start.go:360] acquireMachinesLock for no-preload-265000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:47:08.334654    6224 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 16:47:08.334679    6224 start.go:364] duration metric: took 30.292µs to acquireMachinesLock for "no-preload-265000"
	I0805 16:47:08.334691    6224 start.go:93] Provisioning new machine with config: &{Name:no-preload-265000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-265000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:47:08.334736    6224 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:47:08.334794    6224 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0805 16:47:08.334829    6224 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0805 16:47:08.342141    6224 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:47:08.345235    6224 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 16:47:08.345503    6224 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 16:47:08.346015    6224 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 16:47:08.346084    6224 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 16:47:08.346100    6224 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0805 16:47:08.346162    6224 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0805 16:47:08.347415    6224 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0805 16:47:08.358223    6224 start.go:159] libmachine.API.Create for "no-preload-265000" (driver="qemu2")
	I0805 16:47:08.358243    6224 client.go:168] LocalClient.Create starting
	I0805 16:47:08.358331    6224 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:47:08.358361    6224 main.go:141] libmachine: Decoding PEM data...
	I0805 16:47:08.358373    6224 main.go:141] libmachine: Parsing certificate...
	I0805 16:47:08.358416    6224 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:47:08.358438    6224 main.go:141] libmachine: Decoding PEM data...
	I0805 16:47:08.358450    6224 main.go:141] libmachine: Parsing certificate...
	I0805 16:47:08.358825    6224 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:47:08.514774    6224 main.go:141] libmachine: Creating SSH key...
	I0805 16:47:08.582054    6224 main.go:141] libmachine: Creating Disk image...
	I0805 16:47:08.582084    6224 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:47:08.582297    6224 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/disk.qcow2
	I0805 16:47:08.592349    6224 main.go:141] libmachine: STDOUT: 
	I0805 16:47:08.592372    6224 main.go:141] libmachine: STDERR: 
	I0805 16:47:08.592418    6224 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/disk.qcow2 +20000M
	I0805 16:47:08.601523    6224 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:47:08.601542    6224 main.go:141] libmachine: STDERR: 
	I0805 16:47:08.601567    6224 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/disk.qcow2
	I0805 16:47:08.601571    6224 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:47:08.601585    6224 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:47:08.601613    6224 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:17:ac:fe:ba:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/disk.qcow2
	I0805 16:47:08.603553    6224 main.go:141] libmachine: STDOUT: 
	I0805 16:47:08.603574    6224 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:47:08.603593    6224 client.go:171] duration metric: took 245.35ms to LocalClient.Create
	I0805 16:47:08.740397    6224 cache.go:162] opening:  /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0805 16:47:08.748928    6224 cache.go:162] opening:  /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0805 16:47:08.751093    6224 cache.go:162] opening:  /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0805 16:47:08.764726    6224 cache.go:162] opening:  /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0805 16:47:08.800709    6224 cache.go:162] opening:  /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0805 16:47:08.804042    6224 cache.go:162] opening:  /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0805 16:47:08.841072    6224 cache.go:162] opening:  /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0
	I0805 16:47:08.945135    6224 cache.go:157] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0805 16:47:08.945154    6224 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 610.708333ms
	I0805 16:47:08.945169    6224 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0805 16:47:10.603719    6224 start.go:128] duration metric: took 2.2690175s to createHost
	I0805 16:47:10.603739    6224 start.go:83] releasing machines lock for "no-preload-265000", held for 2.269099417s
	W0805 16:47:10.603765    6224 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:47:10.609679    6224 out.go:177] * Deleting "no-preload-265000" in qemu2 ...
	W0805 16:47:10.632325    6224 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:47:10.632340    6224 start.go:729] Will try again in 5 seconds ...
	I0805 16:47:10.855309    6224 cache.go:157] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0805 16:47:10.855329    6224 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 2.520971375s
	I0805 16:47:10.855349    6224 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0805 16:47:12.196964    6224 cache.go:157] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 exists
	I0805 16:47:12.196987    6224 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0" took 3.86259725s
	I0805 16:47:12.196999    6224 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 succeeded
	I0805 16:47:12.732833    6224 cache.go:157] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 exists
	I0805 16:47:12.732871    6224 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0" took 4.398699584s
	I0805 16:47:12.732887    6224 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 succeeded
	I0805 16:47:12.929824    6224 cache.go:157] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 exists
	I0805 16:47:12.929850    6224 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0" took 4.5954855s
	I0805 16:47:12.929861    6224 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 succeeded
	I0805 16:47:13.626507    6224 cache.go:157] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 exists
	I0805 16:47:13.626537    6224 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0" took 5.292384292s
	I0805 16:47:13.626595    6224 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 succeeded
	I0805 16:47:14.950465    6224 cache.go:157] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 exists
	I0805 16:47:14.950486    6224 cache.go:96] cache image "registry.k8s.io/etcd:3.5.7-0" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0" took 6.616329125s
	I0805 16:47:14.950497    6224 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.7-0 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 succeeded
	I0805 16:47:14.950512    6224 cache.go:87] Successfully saved all images to host disk.
	I0805 16:47:15.634421    6224 start.go:360] acquireMachinesLock for no-preload-265000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:47:15.634877    6224 start.go:364] duration metric: took 374.209µs to acquireMachinesLock for "no-preload-265000"
	I0805 16:47:15.634987    6224 start.go:93] Provisioning new machine with config: &{Name:no-preload-265000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-265000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:47:15.635196    6224 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:47:15.645757    6224 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:47:15.684589    6224 start.go:159] libmachine.API.Create for "no-preload-265000" (driver="qemu2")
	I0805 16:47:15.684637    6224 client.go:168] LocalClient.Create starting
	I0805 16:47:15.684743    6224 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:47:15.684805    6224 main.go:141] libmachine: Decoding PEM data...
	I0805 16:47:15.684825    6224 main.go:141] libmachine: Parsing certificate...
	I0805 16:47:15.684903    6224 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:47:15.684944    6224 main.go:141] libmachine: Decoding PEM data...
	I0805 16:47:15.684958    6224 main.go:141] libmachine: Parsing certificate...
	I0805 16:47:15.685488    6224 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:47:15.845789    6224 main.go:141] libmachine: Creating SSH key...
	I0805 16:47:16.052988    6224 main.go:141] libmachine: Creating Disk image...
	I0805 16:47:16.052999    6224 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:47:16.053237    6224 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/disk.qcow2
	I0805 16:47:16.062948    6224 main.go:141] libmachine: STDOUT: 
	I0805 16:47:16.062969    6224 main.go:141] libmachine: STDERR: 
	I0805 16:47:16.063034    6224 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/disk.qcow2 +20000M
	I0805 16:47:16.071008    6224 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:47:16.071034    6224 main.go:141] libmachine: STDERR: 
	I0805 16:47:16.071048    6224 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/disk.qcow2
	I0805 16:47:16.071054    6224 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:47:16.071065    6224 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:47:16.071108    6224 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:8a:c8:6c:9f:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/disk.qcow2
	I0805 16:47:16.072857    6224 main.go:141] libmachine: STDOUT: 
	I0805 16:47:16.072876    6224 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:47:16.072890    6224 client.go:171] duration metric: took 388.256041ms to LocalClient.Create
	I0805 16:47:18.075104    6224 start.go:128] duration metric: took 2.439916583s to createHost
	I0805 16:47:18.075179    6224 start.go:83] releasing machines lock for "no-preload-265000", held for 2.44032825s
	W0805 16:47:18.075684    6224 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-265000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-265000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:47:18.085072    6224 out.go:177] 
	W0805 16:47:18.093345    6224 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:47:18.093378    6224 out.go:239] * 
	* 
	W0805 16:47:18.095863    6224 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:47:18.103240    6224 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-265000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000: exit status 7 (63.03825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.96s)
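
Note: every ~10 s FirstStart failure in this report has the same proximate cause, visible twice in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the qemu2 VM is never created and minikube exits 80 after its single retry. A minimal sketch for checking the daemon independently of minikube, assuming the SocketVMnetPath from the cluster config above:

package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// SocketVMnetPath from the cluster config logged above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.Dial("unix", sock)
	if err != nil {
		// "connection refused" here reproduces the libmachine failure:
		// the socket path is known, but no daemon is accepting on it.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe also fails, the fault is in the host's socket_vmnet service rather than in any individual test.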

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-265000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-265000 create -f testdata/busybox.yaml: exit status 1 (31.808542ms)

** stderr ** 
	error: context "no-preload-265000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-265000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000: exit status 7 (29.147708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-265000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000: exit status 7 (29.8115ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
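
Note: DeployApp (and the addon tests that follow) fail by cascade: FirstStart never created the cluster, so the kubeconfig has no no-preload-265000 context and every kubectl call exits 1 immediately. A sketch of a pre-flight context check that separates this cascade from a genuine deploy failure; the `-o name` output format of `kubectl config get-contexts` is the only assumption beyond the logs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// contextExists reports whether the kubeconfig contains the named context,
// using `kubectl config get-contexts -o name` (one context name per line).
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("no-preload-265000")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("context exists:", ok)
}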

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-265000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-265000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-265000 describe deploy/metrics-server -n kube-system: exit status 1 (27.95125ms)

** stderr ** 
	error: context "no-preload-265000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-265000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000: exit status 7 (28.928ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
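
Note: judging from the failure message at start_stop_delete_test.go:221, the assertion reduces to a substring check over the `kubectl describe` output: the deployment info must contain the registry-rewritten image. A simplified sketch of that check, assuming only the expected string printed in the failure above; it fails here because the describe call returned no deployment info at all:

package main

import (
	"fmt"
	"strings"
)

// checkAddonImage is a simplified form of the image assertion implied by the
// failure message above: the `kubectl describe deploy/metrics-server` output
// must mention the registry-rewritten echoserver image.
func checkAddonImage(deployInfo string) error {
	const expected = " fake.domain/registry.k8s.io/echoserver:1.4"
	if !strings.Contains(deployInfo, expected) {
		return fmt.Errorf("addon did not load correct image: expected %q in deployment info %q", expected, deployInfo)
	}
	return nil
}

func main() {
	// Empty describe output, as in the cascade above, fails the check.
	fmt.Println(checkAddonImage(""))
}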

TestStartStop/group/embed-certs/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-842000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-842000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.926008958s)

-- stdout --
	* [embed-certs-842000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-842000" primary control-plane node in "embed-certs-842000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-842000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:47:19.782181    6295 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:47:19.782309    6295 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:19.782312    6295 out.go:304] Setting ErrFile to fd 2...
	I0805 16:47:19.782315    6295 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:19.782461    6295 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:47:19.783569    6295 out.go:298] Setting JSON to false
	I0805 16:47:19.799467    6295 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4610,"bootTime":1722897029,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:47:19.799546    6295 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:47:19.803822    6295 out.go:177] * [embed-certs-842000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:47:19.810701    6295 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:47:19.810752    6295 notify.go:220] Checking for updates...
	I0805 16:47:19.816798    6295 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:47:19.818159    6295 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:47:19.820784    6295 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:47:19.823801    6295 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:47:19.826808    6295 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:47:19.830136    6295 config.go:182] Loaded profile config "multinode-860000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:47:19.830212    6295 config.go:182] Loaded profile config "no-preload-265000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0805 16:47:19.830264    6295 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:47:19.834805    6295 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 16:47:19.841800    6295 start.go:297] selected driver: qemu2
	I0805 16:47:19.841807    6295 start.go:901] validating driver "qemu2" against <nil>
	I0805 16:47:19.841816    6295 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:47:19.844087    6295 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:47:19.846764    6295 out.go:177] * Automatically selected the socket_vmnet network
	I0805 16:47:19.849916    6295 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:47:19.849962    6295 cni.go:84] Creating CNI manager for ""
	I0805 16:47:19.849970    6295 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:47:19.849974    6295 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 16:47:19.850000    6295 start.go:340] cluster config:
	{Name:embed-certs-842000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:47:19.853633    6295 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:19.860774    6295 out.go:177] * Starting "embed-certs-842000" primary control-plane node in "embed-certs-842000" cluster
	I0805 16:47:19.863789    6295 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:47:19.863805    6295 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 16:47:19.863816    6295 cache.go:56] Caching tarball of preloaded images
	I0805 16:47:19.863881    6295 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:47:19.863887    6295 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:47:19.863954    6295 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/embed-certs-842000/config.json ...
	I0805 16:47:19.863970    6295 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/embed-certs-842000/config.json: {Name:mka3d76ae6ed90d04b1808aefb1a874f60dea1f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:47:19.864181    6295 start.go:360] acquireMachinesLock for embed-certs-842000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:47:19.864214    6295 start.go:364] duration metric: took 27.334µs to acquireMachinesLock for "embed-certs-842000"
	I0805 16:47:19.864225    6295 start.go:93] Provisioning new machine with config: &{Name:embed-certs-842000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:47:19.864252    6295 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:47:19.872649    6295 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:47:19.890282    6295 start.go:159] libmachine.API.Create for "embed-certs-842000" (driver="qemu2")
	I0805 16:47:19.890312    6295 client.go:168] LocalClient.Create starting
	I0805 16:47:19.890373    6295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:47:19.890407    6295 main.go:141] libmachine: Decoding PEM data...
	I0805 16:47:19.890414    6295 main.go:141] libmachine: Parsing certificate...
	I0805 16:47:19.890453    6295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:47:19.890475    6295 main.go:141] libmachine: Decoding PEM data...
	I0805 16:47:19.890486    6295 main.go:141] libmachine: Parsing certificate...
	I0805 16:47:19.890829    6295 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:47:20.055443    6295 main.go:141] libmachine: Creating SSH key...
	I0805 16:47:20.150892    6295 main.go:141] libmachine: Creating Disk image...
	I0805 16:47:20.150897    6295 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:47:20.151083    6295 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/disk.qcow2
	I0805 16:47:20.160345    6295 main.go:141] libmachine: STDOUT: 
	I0805 16:47:20.160363    6295 main.go:141] libmachine: STDERR: 
	I0805 16:47:20.160405    6295 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/disk.qcow2 +20000M
	I0805 16:47:20.168324    6295 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:47:20.168342    6295 main.go:141] libmachine: STDERR: 
	I0805 16:47:20.168355    6295 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/disk.qcow2
	I0805 16:47:20.168360    6295 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:47:20.168370    6295 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:47:20.168399    6295 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:45:44:27:7a:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/disk.qcow2
	I0805 16:47:20.170068    6295 main.go:141] libmachine: STDOUT: 
	I0805 16:47:20.170083    6295 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:47:20.170102    6295 client.go:171] duration metric: took 279.789083ms to LocalClient.Create
	I0805 16:47:22.172284    6295 start.go:128] duration metric: took 2.308053958s to createHost
	I0805 16:47:22.172363    6295 start.go:83] releasing machines lock for "embed-certs-842000", held for 2.308186125s
	W0805 16:47:22.172483    6295 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:47:22.191776    6295 out.go:177] * Deleting "embed-certs-842000" in qemu2 ...
	W0805 16:47:22.220384    6295 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:47:22.220412    6295 start.go:729] Will try again in 5 seconds ...
	I0805 16:47:27.222483    6295 start.go:360] acquireMachinesLock for embed-certs-842000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:47:27.238371    6295 start.go:364] duration metric: took 15.834167ms to acquireMachinesLock for "embed-certs-842000"
	I0805 16:47:27.238418    6295 start.go:93] Provisioning new machine with config: &{Name:embed-certs-842000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:47:27.238670    6295 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:47:27.251632    6295 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:47:27.299419    6295 start.go:159] libmachine.API.Create for "embed-certs-842000" (driver="qemu2")
	I0805 16:47:27.299466    6295 client.go:168] LocalClient.Create starting
	I0805 16:47:27.299567    6295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:47:27.299637    6295 main.go:141] libmachine: Decoding PEM data...
	I0805 16:47:27.299656    6295 main.go:141] libmachine: Parsing certificate...
	I0805 16:47:27.299718    6295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:47:27.299762    6295 main.go:141] libmachine: Decoding PEM data...
	I0805 16:47:27.299774    6295 main.go:141] libmachine: Parsing certificate...
	I0805 16:47:27.300306    6295 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:47:27.460985    6295 main.go:141] libmachine: Creating SSH key...
	I0805 16:47:27.620196    6295 main.go:141] libmachine: Creating Disk image...
	I0805 16:47:27.620211    6295 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:47:27.620436    6295 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/disk.qcow2
	I0805 16:47:27.630864    6295 main.go:141] libmachine: STDOUT: 
	I0805 16:47:27.630887    6295 main.go:141] libmachine: STDERR: 
	I0805 16:47:27.630951    6295 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/disk.qcow2 +20000M
	I0805 16:47:27.639933    6295 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:47:27.639956    6295 main.go:141] libmachine: STDERR: 
	I0805 16:47:27.639966    6295 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/disk.qcow2
	I0805 16:47:27.639971    6295 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:47:27.639986    6295 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:47:27.640018    6295 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:c9:a7:22:19:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/disk.qcow2
	I0805 16:47:27.641861    6295 main.go:141] libmachine: STDOUT: 
	I0805 16:47:27.641881    6295 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:47:27.641900    6295 client.go:171] duration metric: took 342.435917ms to LocalClient.Create
	I0805 16:47:29.644121    6295 start.go:128] duration metric: took 2.405470291s to createHost
	I0805 16:47:29.644172    6295 start.go:83] releasing machines lock for "embed-certs-842000", held for 2.405827792s
	W0805 16:47:29.644447    6295 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:47:29.655972    6295 out.go:177] 
	W0805 16:47:29.660129    6295 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:47:29.660175    6295 out.go:239] * 
	* 
	W0805 16:47:29.662435    6295 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:47:29.670988    6295 out.go:177] 

** /stderr **
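Note: before the VM launch fails, the stderr dump above shows libmachine preparing the guest disk with two qemu-img invocations: a raw-to-qcow2 convert followed by a +20000M resize. A minimal sketch of those two steps via os/exec, with placeholder paths rather than the CI paths:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	raw := "/tmp/disk.qcow2.raw" // hypothetical input image
	qcow := "/tmp/disk.qcow2"    // hypothetical output image

	// Step 1: convert the raw seed image to qcow2, as in the log.
	convert := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow)
	if out, err := convert.CombinedOutput(); err != nil {
		fmt.Printf("convert failed: %v\n%s", err, out)
		return
	}

	// Step 2: grow the virtual disk by 20000 MB ("+20000M" in the log).
	resize := exec.Command("qemu-img", "resize", qcow, "+20000M")
	if out, err := resize.CombinedOutput(); err != nil {
		fmt.Printf("resize failed: %v\n%s", err, out)
		return
	}
	fmt.Println("disk image ready:", qcow)
}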
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-842000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-842000 -n embed-certs-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-842000 -n embed-certs-842000: exit status 7 (49.260458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.98s)
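Note: every start failure in this run reduces to the same STDERR line: QEMU is launched through socket_vmnet_client, and the connect to /var/run/socket_vmnet is refused, i.e. no socket_vmnet daemon appears to be listening on the CI host. A minimal Go probe of that socket (path taken from the log; not part of minikube) reproduces the diagnosis:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket that socket_vmnet_client connects to.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// With no daemon listening, this fails with "connection refused",
		// matching the STDERR captured in the logs above.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}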

TestStartStop/group/no-preload/serial/SecondStart (5.97s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-265000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-265000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (5.917770375s)

-- stdout --
	* [no-preload-265000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-265000" primary control-plane node in "no-preload-265000" cluster
	* Restarting existing qemu2 VM for "no-preload-265000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-265000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:47:21.390397    6317 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:47:21.390536    6317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:21.390539    6317 out.go:304] Setting ErrFile to fd 2...
	I0805 16:47:21.390542    6317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:21.390664    6317 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:47:21.391725    6317 out.go:298] Setting JSON to false
	I0805 16:47:21.407594    6317 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4612,"bootTime":1722897029,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:47:21.407660    6317 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:47:21.412166    6317 out.go:177] * [no-preload-265000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:47:21.419206    6317 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:47:21.419266    6317 notify.go:220] Checking for updates...
	I0805 16:47:21.426115    6317 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:47:21.429161    6317 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:47:21.432163    6317 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:47:21.435129    6317 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:47:21.438172    6317 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:47:21.441612    6317 config.go:182] Loaded profile config "no-preload-265000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0805 16:47:21.441881    6317 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:47:21.446104    6317 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 16:47:21.452130    6317 start.go:297] selected driver: qemu2
	I0805 16:47:21.452138    6317 start.go:901] validating driver "qemu2" against &{Name:no-preload-265000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-265000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:47:21.452201    6317 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:47:21.454512    6317 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:47:21.454554    6317 cni.go:84] Creating CNI manager for ""
	I0805 16:47:21.454561    6317 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:47:21.454593    6317 start.go:340] cluster config:
	{Name:no-preload-265000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-265000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:47:21.458072    6317 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:21.466188    6317 out.go:177] * Starting "no-preload-265000" primary control-plane node in "no-preload-265000" cluster
	I0805 16:47:21.470112    6317 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 16:47:21.470188    6317 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/no-preload-265000/config.json ...
	I0805 16:47:21.470217    6317 cache.go:107] acquiring lock: {Name:mkc83aeebe6b3487ea0a0222042b3a14ab188be6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:21.470244    6317 cache.go:107] acquiring lock: {Name:mkf7e8a98e2b8ae0466bc13132e3150172af1e85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:21.470213    6317 cache.go:107] acquiring lock: {Name:mkdb304c7bbd79570fe8e51264f4688630824a9e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:21.470297    6317 cache.go:107] acquiring lock: {Name:mk64803bbe3efb913b30bac942612863fb1b3464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:21.470306    6317 cache.go:115] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 exists
	I0805 16:47:21.470320    6317 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0" took 105.333µs
	I0805 16:47:21.470325    6317 cache.go:115] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0805 16:47:21.470332    6317 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 123.167µs
	I0805 16:47:21.470337    6317 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0805 16:47:21.470327    6317 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 succeeded
	I0805 16:47:21.470344    6317 cache.go:107] acquiring lock: {Name:mk73a4cf503166775a3cd6f2f7e1e12a189b6a31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:21.470366    6317 cache.go:115] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0805 16:47:21.470375    6317 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 131.25µs
	I0805 16:47:21.470379    6317 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0805 16:47:21.470366    6317 cache.go:115] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 exists
	I0805 16:47:21.470381    6317 cache.go:115] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 exists
	I0805 16:47:21.470385    6317 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0" took 137.25µs
	I0805 16:47:21.470386    6317 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0" took 41.917µs
	I0805 16:47:21.470388    6317 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 succeeded
	I0805 16:47:21.470390    6317 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 succeeded
	I0805 16:47:21.470353    6317 cache.go:107] acquiring lock: {Name:mk221df91522bfad87b4af8e6d4238022949f1ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:21.470378    6317 cache.go:107] acquiring lock: {Name:mkb3d37007b4d9b676c2b96490b474b2202f547a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:21.470433    6317 cache.go:115] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 exists
	I0805 16:47:21.470437    6317 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0" took 101.25µs
	I0805 16:47:21.470441    6317 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 succeeded
	I0805 16:47:21.470445    6317 cache.go:107] acquiring lock: {Name:mk9dbc1c8335885d6f613fd756e258ab697de504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:21.470458    6317 cache.go:115] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 exists
	I0805 16:47:21.470464    6317 cache.go:96] cache image "registry.k8s.io/etcd:3.5.7-0" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0" took 86.959µs
	I0805 16:47:21.470470    6317 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.7-0 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 succeeded
	I0805 16:47:21.470485    6317 cache.go:115] /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0805 16:47:21.470489    6317 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 65.292µs
	I0805 16:47:21.470494    6317 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0805 16:47:21.470499    6317 cache.go:87] Successfully saved all images to host disk.
	I0805 16:47:21.470645    6317 start.go:360] acquireMachinesLock for no-preload-265000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:47:22.172498    6317 start.go:364] duration metric: took 701.846625ms to acquireMachinesLock for "no-preload-265000"
	I0805 16:47:22.172664    6317 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:47:22.172695    6317 fix.go:54] fixHost starting: 
	I0805 16:47:22.173348    6317 fix.go:112] recreateIfNeeded on no-preload-265000: state=Stopped err=<nil>
	W0805 16:47:22.173395    6317 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:47:22.182772    6317 out.go:177] * Restarting existing qemu2 VM for "no-preload-265000" ...
	I0805 16:47:22.193313    6317 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:47:22.193531    6317 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:8a:c8:6c:9f:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/disk.qcow2
	I0805 16:47:22.202889    6317 main.go:141] libmachine: STDOUT: 
	I0805 16:47:22.202981    6317 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:47:22.203120    6317 fix.go:56] duration metric: took 30.417292ms for fixHost
	I0805 16:47:22.203139    6317 start.go:83] releasing machines lock for "no-preload-265000", held for 30.589917ms
	W0805 16:47:22.203192    6317 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:47:22.203409    6317 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:47:22.203430    6317 start.go:729] Will try again in 5 seconds ...
	I0805 16:47:27.205593    6317 start.go:360] acquireMachinesLock for no-preload-265000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:47:27.206172    6317 start.go:364] duration metric: took 401.459µs to acquireMachinesLock for "no-preload-265000"
	I0805 16:47:27.206319    6317 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:47:27.206338    6317 fix.go:54] fixHost starting: 
	I0805 16:47:27.207128    6317 fix.go:112] recreateIfNeeded on no-preload-265000: state=Stopped err=<nil>
	W0805 16:47:27.207157    6317 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:47:27.224595    6317 out.go:177] * Restarting existing qemu2 VM for "no-preload-265000" ...
	I0805 16:47:27.228627    6317 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:47:27.228829    6317 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:8a:c8:6c:9f:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/no-preload-265000/disk.qcow2
	I0805 16:47:27.238133    6317 main.go:141] libmachine: STDOUT: 
	I0805 16:47:27.238201    6317 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:47:27.238289    6317 fix.go:56] duration metric: took 31.942125ms for fixHost
	I0805 16:47:27.238304    6317 start.go:83] releasing machines lock for "no-preload-265000", held for 32.104958ms
	W0805 16:47:27.238462    6317 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-265000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-265000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:47:27.255583    6317 out.go:177] 
	W0805 16:47:27.259696    6317 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:47:27.259732    6317 out.go:239] * 
	* 
	W0805 16:47:27.261721    6317 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:47:27.270639    6317 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-265000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000: exit status 7 (48.723958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.97s)
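Note: the stderr dump above shows minikube's retry behavior on start failure: it logs "Will try again in 5 seconds ...", sleeps, and makes one more attempt before exiting with GUEST_PROVISION. A simplified sketch of that two-attempt loop; startHost here is a stand-in, not minikube's real function:

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver start that fails above.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": connection refused`)
}

func main() {
	var err error
	for attempt := 1; attempt <= 2; attempt++ {
		if err = startHost(); err == nil {
			fmt.Println("host started")
			return
		}
		// Mirrors the "! StartHost failed, but will try again" log lines.
		fmt.Printf("StartHost failed (attempt %d): %v\n", attempt, err)
		if attempt < 2 {
			time.Sleep(5 * time.Second)
		}
	}
	fmt.Println("giving up:", err)
}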

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-265000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000: exit status 7 (34.482917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
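Note: this failure is downstream of the start failures: the "no-preload-265000" profile was never provisioned, so its kubeconfig context does not exist and the client config lookup fails before any pod can be polled. One way to check for the context directly, assuming kubectl is on PATH:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// kubectl exits non-zero and prints an error if the named context
	// is absent from the kubeconfig.
	cmd := exec.Command("kubectl", "config", "get-contexts", "no-preload-265000")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("context missing: %v\n%s", err, out)
		return
	}
	fmt.Println("context exists")
}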

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-265000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-265000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-265000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.300833ms)

** stderr ** 
	error: context "no-preload-265000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-265000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000: exit status 7 (32.438583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-265000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-rc.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-rc.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000: exit status 7 (30.526667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
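Note: the want/got diff above compares the expected image set for v1.31.0-rc.0 against the output of "minikube image list"; with the VM stopped nothing is listed, so every expected image is reported missing. A simplified set-difference sketch of that comparison (the real test renders the result with a diff helper):

package main

import "fmt"

// missingImages returns every image in want that does not appear in got.
func missingImages(want, got []string) []string {
	have := make(map[string]bool, len(got))
	for _, img := range got {
		have[img] = true
	}
	var missing []string
	for _, img := range want {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() {
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
		"registry.k8s.io/pause:3.9",
	}
	got := []string{} // a stopped VM reports no images, as in the log
	fmt.Println("missing:", missingImages(want, got))
}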

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-265000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-265000 --alsologtostderr -v=1: exit status 83 (45.869791ms)

-- stdout --
	* The control-plane node no-preload-265000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-265000"

-- /stdout --
** stderr ** 
	I0805 16:47:27.538244    6341 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:47:27.538391    6341 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:27.538398    6341 out.go:304] Setting ErrFile to fd 2...
	I0805 16:47:27.538400    6341 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:27.538524    6341 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:47:27.538775    6341 out.go:298] Setting JSON to false
	I0805 16:47:27.538782    6341 mustload.go:65] Loading cluster: no-preload-265000
	I0805 16:47:27.538967    6341 config.go:182] Loaded profile config "no-preload-265000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0805 16:47:27.543623    6341 out.go:177] * The control-plane node no-preload-265000 host is not running: state=Stopped
	I0805 16:47:27.547580    6341 out.go:177]   To start a cluster, run: "minikube start -p no-preload-265000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-265000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000: exit status 7 (28.782833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-265000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000: exit status 7 (29.119625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
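Note: throughout these post-mortems, "minikube status" exits with status 7 and helpers_test.go treats that as "host is not running, skipping log retrieval" rather than a hard error. A sketch of distinguishing that exit code with os/exec; the binary path and profile name are taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "no-preload-265000")
	out, err := cmd.Output()
	fmt.Printf("status: %s\n", out)
	// Per the helper, exit status 7 corresponds to a stopped host (may be ok).
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
		fmt.Println("host not running (may be ok)")
	} else if err != nil {
		fmt.Println("unexpected status error:", err)
	}
}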

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-624000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-624000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (11.464985084s)

-- stdout --
	* [default-k8s-diff-port-624000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-624000" primary control-plane node in "default-k8s-diff-port-624000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-624000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:47:27.967572    6368 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:47:27.967720    6368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:27.967729    6368 out.go:304] Setting ErrFile to fd 2...
	I0805 16:47:27.967732    6368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:27.967884    6368 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:47:27.969034    6368 out.go:298] Setting JSON to false
	I0805 16:47:27.985130    6368 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4618,"bootTime":1722897029,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:47:27.985201    6368 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:47:27.989632    6368 out.go:177] * [default-k8s-diff-port-624000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:47:27.996470    6368 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:47:27.996511    6368 notify.go:220] Checking for updates...
	I0805 16:47:28.002574    6368 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:47:28.004097    6368 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:47:28.007553    6368 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:47:28.010601    6368 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:47:28.013600    6368 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:47:28.016949    6368 config.go:182] Loaded profile config "embed-certs-842000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:47:28.017017    6368 config.go:182] Loaded profile config "multinode-860000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:47:28.017068    6368 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:47:28.021646    6368 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 16:47:28.028534    6368 start.go:297] selected driver: qemu2
	I0805 16:47:28.028540    6368 start.go:901] validating driver "qemu2" against <nil>
	I0805 16:47:28.028545    6368 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:47:28.030798    6368 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:47:28.033593    6368 out.go:177] * Automatically selected the socket_vmnet network
	I0805 16:47:28.036633    6368 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:47:28.036661    6368 cni.go:84] Creating CNI manager for ""
	I0805 16:47:28.036668    6368 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:47:28.036673    6368 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 16:47:28.036706    6368 start.go:340] cluster config:
	{Name:default-k8s-diff-port-624000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-624000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:47:28.040486    6368 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:28.048616    6368 out.go:177] * Starting "default-k8s-diff-port-624000" primary control-plane node in "default-k8s-diff-port-624000" cluster
	I0805 16:47:28.051567    6368 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:47:28.051582    6368 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 16:47:28.051591    6368 cache.go:56] Caching tarball of preloaded images
	I0805 16:47:28.051666    6368 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:47:28.051673    6368 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:47:28.051739    6368 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/default-k8s-diff-port-624000/config.json ...
	I0805 16:47:28.051755    6368 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/default-k8s-diff-port-624000/config.json: {Name:mk4b7296c96426bc43877b6268e3240970a2da25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:47:28.052177    6368 start.go:360] acquireMachinesLock for default-k8s-diff-port-624000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:47:29.644384    6368 start.go:364] duration metric: took 1.59221425s to acquireMachinesLock for "default-k8s-diff-port-624000"
	I0805 16:47:29.644539    6368 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-624000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-624000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:47:29.644719    6368 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:47:29.653036    6368 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:47:29.701071    6368 start.go:159] libmachine.API.Create for "default-k8s-diff-port-624000" (driver="qemu2")
	I0805 16:47:29.701120    6368 client.go:168] LocalClient.Create starting
	I0805 16:47:29.701222    6368 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:47:29.701281    6368 main.go:141] libmachine: Decoding PEM data...
	I0805 16:47:29.701299    6368 main.go:141] libmachine: Parsing certificate...
	I0805 16:47:29.701367    6368 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:47:29.701416    6368 main.go:141] libmachine: Decoding PEM data...
	I0805 16:47:29.701433    6368 main.go:141] libmachine: Parsing certificate...
	I0805 16:47:29.702035    6368 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:47:29.893131    6368 main.go:141] libmachine: Creating SSH key...
	I0805 16:47:29.969648    6368 main.go:141] libmachine: Creating Disk image...
	I0805 16:47:29.969662    6368 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:47:29.969833    6368 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/disk.qcow2
	I0805 16:47:29.979289    6368 main.go:141] libmachine: STDOUT: 
	I0805 16:47:29.979319    6368 main.go:141] libmachine: STDERR: 
	I0805 16:47:29.979411    6368 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/disk.qcow2 +20000M
	I0805 16:47:29.988498    6368 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:47:29.988527    6368 main.go:141] libmachine: STDERR: 
	I0805 16:47:29.988541    6368 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/disk.qcow2
	I0805 16:47:29.988546    6368 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:47:29.988555    6368 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:47:29.988588    6368 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:d1:99:52:8d:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/disk.qcow2
	I0805 16:47:29.990979    6368 main.go:141] libmachine: STDOUT: 
	I0805 16:47:29.990998    6368 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:47:29.991018    6368 client.go:171] duration metric: took 289.89725ms to LocalClient.Create
	I0805 16:47:31.993192    6368 start.go:128] duration metric: took 2.348485459s to createHost
	I0805 16:47:31.993255    6368 start.go:83] releasing machines lock for "default-k8s-diff-port-624000", held for 2.348884041s
	W0805 16:47:31.993311    6368 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:47:32.002766    6368 out.go:177] * Deleting "default-k8s-diff-port-624000" in qemu2 ...
	W0805 16:47:32.027883    6368 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:47:32.027908    6368 start.go:729] Will try again in 5 seconds ...
	I0805 16:47:37.029352    6368 start.go:360] acquireMachinesLock for default-k8s-diff-port-624000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:47:37.029870    6368 start.go:364] duration metric: took 391.667µs to acquireMachinesLock for "default-k8s-diff-port-624000"
	I0805 16:47:37.030023    6368 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-624000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-624000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:47:37.030319    6368 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:47:37.038982    6368 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:47:37.087149    6368 start.go:159] libmachine.API.Create for "default-k8s-diff-port-624000" (driver="qemu2")
	I0805 16:47:37.087195    6368 client.go:168] LocalClient.Create starting
	I0805 16:47:37.087294    6368 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:47:37.087368    6368 main.go:141] libmachine: Decoding PEM data...
	I0805 16:47:37.087388    6368 main.go:141] libmachine: Parsing certificate...
	I0805 16:47:37.087451    6368 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:47:37.087501    6368 main.go:141] libmachine: Decoding PEM data...
	I0805 16:47:37.087514    6368 main.go:141] libmachine: Parsing certificate...
	I0805 16:47:37.088170    6368 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:47:37.252226    6368 main.go:141] libmachine: Creating SSH key...
	I0805 16:47:37.328189    6368 main.go:141] libmachine: Creating Disk image...
	I0805 16:47:37.328194    6368 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:47:37.328395    6368 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/disk.qcow2
	I0805 16:47:37.337767    6368 main.go:141] libmachine: STDOUT: 
	I0805 16:47:37.337785    6368 main.go:141] libmachine: STDERR: 
	I0805 16:47:37.337838    6368 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/disk.qcow2 +20000M
	I0805 16:47:37.345740    6368 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:47:37.345755    6368 main.go:141] libmachine: STDERR: 
	I0805 16:47:37.345776    6368 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/disk.qcow2
	I0805 16:47:37.345781    6368 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:47:37.345791    6368 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:47:37.345818    6368 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:72:f7:2b:25:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/disk.qcow2
	I0805 16:47:37.347431    6368 main.go:141] libmachine: STDOUT: 
	I0805 16:47:37.347450    6368 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:47:37.347462    6368 client.go:171] duration metric: took 260.267ms to LocalClient.Create
	I0805 16:47:39.348707    6368 start.go:128] duration metric: took 2.318363792s to createHost
	I0805 16:47:39.348794    6368 start.go:83] releasing machines lock for "default-k8s-diff-port-624000", held for 2.318927s
	W0805 16:47:39.349147    6368 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-624000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-624000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:47:39.360747    6368 out.go:177] 
	W0805 16:47:39.370766    6368 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:47:39.370794    6368 out.go:239] * 
	* 
	W0805 16:47:39.373280    6368 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:47:39.385740    6368 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-624000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-624000 -n default-k8s-diff-port-624000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-624000 -n default-k8s-diff-port-624000: exit status 7 (63.895792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-624000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.53s)
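Every qemu2 failure in this run bottoms out at the same step: minikube shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and LocalClient.Create gives up on both attempts. A minimal triage sketch for the build host follows; it assumes socket_vmnet was installed via Homebrew and registered as a root service, which the paths in the log suggest but do not prove:

    # Does the daemon's unix socket exist at the path minikube is using?
    ls -l /var/run/socket_vmnet

    # Restart the daemon (vmnet requires root, hence sudo):
    sudo brew services restart socket_vmnet

Once the socket accepts connections again, the same start command should get past "Starting QEMU VM..." instead of failing inside create.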

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-842000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-842000 create -f testdata/busybox.yaml: exit status 1 (30.607292ms)

** stderr ** 
	error: context "embed-certs-842000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-842000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-842000 -n embed-certs-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-842000 -n embed-certs-842000: exit status 7 (32.028916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-842000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-842000 -n embed-certs-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-842000 -n embed-certs-842000: exit status 7 (32.434958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
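This DeployApp failure is a cascade from the failed start, not an independent bug: the cluster was never created, so no kubeconfig context named "embed-certs-842000" exists, and every "kubectl --context embed-certs-842000" invocation in this group exits 1 with the same message. The dependency is easy to verify with plain kubectl (nothing minikube-specific assumed):

    kubectl config get-contexts embed-certs-842000
    # expected on this host: an error reporting that the context was not found

The later "context does not exist" failures in this group, and in the default-k8s-diff-port group below, follow the identical pattern.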

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.14s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-842000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-842000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-842000 describe deploy/metrics-server -n kube-system: exit status 1 (26.8875ms)

** stderr ** 
	error: context "embed-certs-842000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-842000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-842000 -n embed-certs-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-842000 -n embed-certs-842000: exit status 7 (29.468417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.14s)

TestStartStop/group/embed-certs/serial/SecondStart (6.1s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-842000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-842000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (6.049357583s)

-- stdout --
	* [embed-certs-842000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-842000" primary control-plane node in "embed-certs-842000" cluster
	* Restarting existing qemu2 VM for "embed-certs-842000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-842000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:47:33.404193    6420 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:47:33.404321    6420 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:33.404325    6420 out.go:304] Setting ErrFile to fd 2...
	I0805 16:47:33.404327    6420 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:33.404457    6420 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:47:33.405475    6420 out.go:298] Setting JSON to false
	I0805 16:47:33.421392    6420 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4624,"bootTime":1722897029,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:47:33.421482    6420 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:47:33.426733    6420 out.go:177] * [embed-certs-842000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:47:33.433704    6420 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:47:33.433735    6420 notify.go:220] Checking for updates...
	I0805 16:47:33.440698    6420 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:47:33.443669    6420 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:47:33.446695    6420 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:47:33.449634    6420 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:47:33.452667    6420 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:47:33.455986    6420 config.go:182] Loaded profile config "embed-certs-842000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:47:33.456230    6420 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:47:33.459640    6420 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 16:47:33.466683    6420 start.go:297] selected driver: qemu2
	I0805 16:47:33.466689    6420 start.go:901] validating driver "qemu2" against &{Name:embed-certs-842000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:47:33.466748    6420 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:47:33.469032    6420 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:47:33.469069    6420 cni.go:84] Creating CNI manager for ""
	I0805 16:47:33.469075    6420 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:47:33.469098    6420 start.go:340] cluster config:
	{Name:embed-certs-842000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:47:33.472636    6420 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:33.480680    6420 out.go:177] * Starting "embed-certs-842000" primary control-plane node in "embed-certs-842000" cluster
	I0805 16:47:33.484725    6420 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:47:33.484740    6420 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 16:47:33.484753    6420 cache.go:56] Caching tarball of preloaded images
	I0805 16:47:33.484810    6420 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:47:33.484822    6420 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:47:33.484892    6420 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/embed-certs-842000/config.json ...
	I0805 16:47:33.485414    6420 start.go:360] acquireMachinesLock for embed-certs-842000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:47:33.485453    6420 start.go:364] duration metric: took 33.125µs to acquireMachinesLock for "embed-certs-842000"
	I0805 16:47:33.485461    6420 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:47:33.485469    6420 fix.go:54] fixHost starting: 
	I0805 16:47:33.485594    6420 fix.go:112] recreateIfNeeded on embed-certs-842000: state=Stopped err=<nil>
	W0805 16:47:33.485604    6420 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:47:33.489699    6420 out.go:177] * Restarting existing qemu2 VM for "embed-certs-842000" ...
	I0805 16:47:33.497656    6420 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:47:33.497691    6420 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:c9:a7:22:19:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/disk.qcow2
	I0805 16:47:33.499730    6420 main.go:141] libmachine: STDOUT: 
	I0805 16:47:33.499747    6420 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:47:33.499784    6420 fix.go:56] duration metric: took 14.316875ms for fixHost
	I0805 16:47:33.499789    6420 start.go:83] releasing machines lock for "embed-certs-842000", held for 14.33175ms
	W0805 16:47:33.499795    6420 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:47:33.499843    6420 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:47:33.499848    6420 start.go:729] Will try again in 5 seconds ...
	I0805 16:47:38.502173    6420 start.go:360] acquireMachinesLock for embed-certs-842000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:47:39.348983    6420 start.go:364] duration metric: took 846.711542ms to acquireMachinesLock for "embed-certs-842000"
	I0805 16:47:39.349179    6420 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:47:39.349194    6420 fix.go:54] fixHost starting: 
	I0805 16:47:39.349907    6420 fix.go:112] recreateIfNeeded on embed-certs-842000: state=Stopped err=<nil>
	W0805 16:47:39.349933    6420 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:47:39.367718    6420 out.go:177] * Restarting existing qemu2 VM for "embed-certs-842000" ...
	I0805 16:47:39.373726    6420 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:47:39.373954    6420 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:c9:a7:22:19:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/embed-certs-842000/disk.qcow2
	I0805 16:47:39.382855    6420 main.go:141] libmachine: STDOUT: 
	I0805 16:47:39.382910    6420 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:47:39.382982    6420 fix.go:56] duration metric: took 33.787833ms for fixHost
	I0805 16:47:39.382997    6420 start.go:83] releasing machines lock for "embed-certs-842000", held for 33.976167ms
	W0805 16:47:39.383137    6420 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-842000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-842000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:47:39.397681    6420 out.go:177] 
	W0805 16:47:39.401785    6420 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:47:39.401814    6420 out.go:239] * 
	* 
	W0805 16:47:39.403771    6420 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:47:39.411732    6420 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-842000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-842000 -n embed-certs-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-842000 -n embed-certs-842000: exit status 7 (49.978916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.10s)
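Note the one way SecondStart differs from FirstStart: it takes the "Skipping create...Using existing machine configuration" path, so fixHost tries to restart the existing stopped VM instead of provisioning a new one. The restart still goes through socket_vmnet_client, though, so it hits the same "Connection refused". Once the daemon is healthy, the recovery the log itself suggests is to recreate the profile; both commands below are assembled from the output above:

    out/minikube-darwin-arm64 delete -p embed-certs-842000
    out/minikube-darwin-arm64 start -p embed-certs-842000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2 --kubernetes-version=v1.30.3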

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-624000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-624000 create -f testdata/busybox.yaml: exit status 1 (31.269375ms)

** stderr ** 
	error: context "default-k8s-diff-port-624000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-624000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-624000 -n default-k8s-diff-port-624000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-624000 -n default-k8s-diff-port-624000: exit status 7 (29.840084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-624000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-624000 -n default-k8s-diff-port-624000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-624000 -n default-k8s-diff-port-624000: exit status 7 (34.078208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-624000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-842000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-842000 -n embed-certs-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-842000 -n embed-certs-842000: exit status 7 (32.535041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-842000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-842000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-842000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.661166ms)

** stderr ** 
	error: context "embed-certs-842000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-842000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-842000 -n embed-certs-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-842000 -n embed-certs-842000: exit status 7 (31.22325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-624000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-624000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-624000 describe deploy/metrics-server -n kube-system: exit status 1 (28.804958ms)

** stderr ** 
	error: context "default-k8s-diff-port-624000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-624000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-624000 -n default-k8s-diff-port-624000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-624000 -n default-k8s-diff-port-624000: exit status 7 (29.9095ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-624000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-842000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-842000 -n embed-certs-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-842000 -n embed-certs-842000: exit status 7 (30.195125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
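The "-want +got" diff above is another downstream symptom: with the VM stopped, the image listing comes back empty, so every image expected for v1.30.3 shows up as a missing "-" entry and no "+" entries appear at all. The check can be reproduced by hand with the exact command the test ran; against a healthy cluster it prints the JSON image inventory the test diffs against:

    out/minikube-darwin-arm64 -p embed-certs-842000 image list --format=json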

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-842000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-842000 --alsologtostderr -v=1: exit status 83 (49.394333ms)

-- stdout --
	* The control-plane node embed-certs-842000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-842000"

-- /stdout --
** stderr ** 
	I0805 16:47:39.682732    6459 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:47:39.682901    6459 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:39.682904    6459 out.go:304] Setting ErrFile to fd 2...
	I0805 16:47:39.682906    6459 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:39.683045    6459 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:47:39.683266    6459 out.go:298] Setting JSON to false
	I0805 16:47:39.683276    6459 mustload.go:65] Loading cluster: embed-certs-842000
	I0805 16:47:39.683488    6459 config.go:182] Loaded profile config "embed-certs-842000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:47:39.687693    6459 out.go:177] * The control-plane node embed-certs-842000 host is not running: state=Stopped
	I0805 16:47:39.695660    6459 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-842000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-842000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-842000 -n embed-certs-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-842000 -n embed-certs-842000: exit status 7 (30.212166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-842000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-842000 -n embed-certs-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-842000 -n embed-certs-842000: exit status 7 (28.4005ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
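Pause fails fast rather than timing out, which is why this subtest takes only 0.11s: per the stderr above, mustload loads the profile, sees the host in state=Stopped, prints the "To start a cluster" hint, and exits 83 without touching the VM. The status probe the harness runs in its post-mortem is the same one a human would reach for:

    out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-842000   # prints: Stopped
    out/minikube-darwin-arm64 start -p embed-certs-842000                       # the hint from the pause output above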

TestStartStop/group/newest-cni/serial/FirstStart (9.94s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-608000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-608000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (9.868856625s)

-- stdout --
	* [newest-cni-608000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-608000" primary control-plane node in "newest-cni-608000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-608000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:47:39.994041    6484 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:47:39.994160    6484 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:39.994163    6484 out.go:304] Setting ErrFile to fd 2...
	I0805 16:47:39.994166    6484 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:39.994285    6484 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:47:39.995289    6484 out.go:298] Setting JSON to false
	I0805 16:47:40.011130    6484 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4631,"bootTime":1722897029,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:47:40.011202    6484 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:47:40.015684    6484 out.go:177] * [newest-cni-608000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:47:40.022719    6484 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:47:40.022786    6484 notify.go:220] Checking for updates...
	I0805 16:47:40.029785    6484 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:47:40.032725    6484 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:47:40.035748    6484 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:47:40.038713    6484 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:47:40.041683    6484 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:47:40.045019    6484 config.go:182] Loaded profile config "default-k8s-diff-port-624000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:47:40.045083    6484 config.go:182] Loaded profile config "multinode-860000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:47:40.045147    6484 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:47:40.049650    6484 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 16:47:40.056698    6484 start.go:297] selected driver: qemu2
	I0805 16:47:40.056703    6484 start.go:901] validating driver "qemu2" against <nil>
	I0805 16:47:40.056709    6484 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:47:40.058951    6484 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0805 16:47:40.058973    6484 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0805 16:47:40.065658    6484 out.go:177] * Automatically selected the socket_vmnet network
	I0805 16:47:40.068813    6484 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0805 16:47:40.068850    6484 cni.go:84] Creating CNI manager for ""
	I0805 16:47:40.068867    6484 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:47:40.068870    6484 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 16:47:40.068896    6484 start.go:340] cluster config:
	{Name:newest-cni-608000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:47:40.072617    6484 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:40.080670    6484 out.go:177] * Starting "newest-cni-608000" primary control-plane node in "newest-cni-608000" cluster
	I0805 16:47:40.084717    6484 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 16:47:40.084738    6484 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0805 16:47:40.084746    6484 cache.go:56] Caching tarball of preloaded images
	I0805 16:47:40.084808    6484 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:47:40.084814    6484 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0805 16:47:40.084894    6484 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/newest-cni-608000/config.json ...
	I0805 16:47:40.084905    6484 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/newest-cni-608000/config.json: {Name:mk2c1224d71bb3915667fc95f4afc3a98b4a658d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:47:40.085142    6484 start.go:360] acquireMachinesLock for newest-cni-608000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:47:40.085191    6484 start.go:364] duration metric: took 43.667µs to acquireMachinesLock for "newest-cni-608000"
	I0805 16:47:40.085203    6484 start.go:93] Provisioning new machine with config: &{Name:newest-cni-608000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:47:40.085231    6484 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:47:40.093695    6484 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:47:40.112415    6484 start.go:159] libmachine.API.Create for "newest-cni-608000" (driver="qemu2")
	I0805 16:47:40.112452    6484 client.go:168] LocalClient.Create starting
	I0805 16:47:40.112519    6484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:47:40.112548    6484 main.go:141] libmachine: Decoding PEM data...
	I0805 16:47:40.112559    6484 main.go:141] libmachine: Parsing certificate...
	I0805 16:47:40.112595    6484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:47:40.112619    6484 main.go:141] libmachine: Decoding PEM data...
	I0805 16:47:40.112625    6484 main.go:141] libmachine: Parsing certificate...
	I0805 16:47:40.113053    6484 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:47:40.267981    6484 main.go:141] libmachine: Creating SSH key...
	I0805 16:47:40.322730    6484 main.go:141] libmachine: Creating Disk image...
	I0805 16:47:40.322736    6484 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:47:40.322901    6484 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/disk.qcow2
	I0805 16:47:40.332108    6484 main.go:141] libmachine: STDOUT: 
	I0805 16:47:40.332124    6484 main.go:141] libmachine: STDERR: 
	I0805 16:47:40.332179    6484 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/disk.qcow2 +20000M
	I0805 16:47:40.340162    6484 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:47:40.340177    6484 main.go:141] libmachine: STDERR: 
	I0805 16:47:40.340198    6484 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/disk.qcow2
	I0805 16:47:40.340203    6484 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:47:40.340215    6484 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:47:40.340243    6484 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:62:67:44:45:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/disk.qcow2
	I0805 16:47:40.341839    6484 main.go:141] libmachine: STDOUT: 
	I0805 16:47:40.341852    6484 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:47:40.341869    6484 client.go:171] duration metric: took 229.417667ms to LocalClient.Create
	I0805 16:47:42.344045    6484 start.go:128] duration metric: took 2.258834875s to createHost
	I0805 16:47:42.344114    6484 start.go:83] releasing machines lock for "newest-cni-608000", held for 2.258958458s
	W0805 16:47:42.344183    6484 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:47:42.351465    6484 out.go:177] * Deleting "newest-cni-608000" in qemu2 ...
	W0805 16:47:42.385054    6484 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:47:42.385077    6484 start.go:729] Will try again in 5 seconds ...
	I0805 16:47:47.387242    6484 start.go:360] acquireMachinesLock for newest-cni-608000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:47:47.387807    6484 start.go:364] duration metric: took 378.916µs to acquireMachinesLock for "newest-cni-608000"
	I0805 16:47:47.387916    6484 start.go:93] Provisioning new machine with config: &{Name:newest-cni-608000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:47:47.388194    6484 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 16:47:47.397825    6484 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:47:47.450111    6484 start.go:159] libmachine.API.Create for "newest-cni-608000" (driver="qemu2")
	I0805 16:47:47.450159    6484 client.go:168] LocalClient.Create starting
	I0805 16:47:47.450266    6484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/ca.pem
	I0805 16:47:47.450324    6484 main.go:141] libmachine: Decoding PEM data...
	I0805 16:47:47.450340    6484 main.go:141] libmachine: Parsing certificate...
	I0805 16:47:47.450402    6484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1054/.minikube/certs/cert.pem
	I0805 16:47:47.450463    6484 main.go:141] libmachine: Decoding PEM data...
	I0805 16:47:47.450476    6484 main.go:141] libmachine: Parsing certificate...
	I0805 16:47:47.451108    6484 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 16:47:47.611918    6484 main.go:141] libmachine: Creating SSH key...
	I0805 16:47:47.754652    6484 main.go:141] libmachine: Creating Disk image...
	I0805 16:47:47.754659    6484 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 16:47:47.754844    6484 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/disk.qcow2.raw /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/disk.qcow2
	I0805 16:47:47.764471    6484 main.go:141] libmachine: STDOUT: 
	I0805 16:47:47.764496    6484 main.go:141] libmachine: STDERR: 
	I0805 16:47:47.764554    6484 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/disk.qcow2 +20000M
	I0805 16:47:47.772439    6484 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 16:47:47.772454    6484 main.go:141] libmachine: STDERR: 
	I0805 16:47:47.772465    6484 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/disk.qcow2
	I0805 16:47:47.772470    6484 main.go:141] libmachine: Starting QEMU VM...
	I0805 16:47:47.772492    6484 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:47:47.772521    6484 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:6f:11:d0:a9:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/disk.qcow2
	I0805 16:47:47.774160    6484 main.go:141] libmachine: STDOUT: 
	I0805 16:47:47.774186    6484 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:47:47.774210    6484 client.go:171] duration metric: took 324.052709ms to LocalClient.Create
	I0805 16:47:49.776349    6484 start.go:128] duration metric: took 2.388168625s to createHost
	I0805 16:47:49.776402    6484 start.go:83] releasing machines lock for "newest-cni-608000", held for 2.38862075s
	W0805 16:47:49.776801    6484 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-608000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-608000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:47:49.791430    6484 out.go:177] 
	W0805 16:47:49.797407    6484 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:47:49.797452    6484 out.go:239] * 
	* 
	W0805 16:47:49.800189    6484 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:47:49.811496    6484 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-608000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-608000 -n newest-cni-608000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-608000 -n newest-cni-608000: exit status 7 (65.862709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-608000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.94s)
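Every failure in this run reduces to the same root cause visible above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet (the SocketVMnetPath in the cluster config), so QEMU never receives the file descriptor for its -netdev socket and the VM is never launched. A minimal triage sketch for the CI host, assuming socket_vmnet was installed under /opt/socket_vmnet via Homebrew as the client path above suggests (the service name is an assumption):

	ls -l /var/run/socket_vmnet                  # does the daemon's socket exist?
	pgrep -fl socket_vmnet                       # is the socket_vmnet daemon running?
	sudo launchctl list | grep socket_vmnet      # was it loaded as a LaunchDaemon?
	# Restart via Homebrew services (assumed install method; the daemon needs root):
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services restart socket_vmnet

Until that socket accepts connections, each start below fails twice: once on the initial create or restart, and once more on the automatic retry five seconds later.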

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-624000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
E0805 16:47:48.829807    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-624000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (6.71634525s)

-- stdout --
	* [default-k8s-diff-port-624000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-624000" primary control-plane node in "default-k8s-diff-port-624000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-624000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-624000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 16:47:43.162873    6512 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:47:43.163251    6512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:43.163255    6512 out.go:304] Setting ErrFile to fd 2...
	I0805 16:47:43.163257    6512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:43.163431    6512 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:47:43.164749    6512 out.go:298] Setting JSON to false
	I0805 16:47:43.180990    6512 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4634,"bootTime":1722897029,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:47:43.181061    6512 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:47:43.185798    6512 out.go:177] * [default-k8s-diff-port-624000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:47:43.191785    6512 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:47:43.191825    6512 notify.go:220] Checking for updates...
	I0805 16:47:43.198727    6512 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:47:43.201734    6512 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:47:43.204784    6512 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:47:43.206362    6512 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:47:43.209745    6512 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:47:43.213066    6512 config.go:182] Loaded profile config "default-k8s-diff-port-624000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:47:43.213332    6512 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:47:43.217628    6512 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 16:47:43.224750    6512 start.go:297] selected driver: qemu2
	I0805 16:47:43.224759    6512 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-624000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-624000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:47:43.224833    6512 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:47:43.227088    6512 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:47:43.227140    6512 cni.go:84] Creating CNI manager for ""
	I0805 16:47:43.227150    6512 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:47:43.227185    6512 start.go:340] cluster config:
	{Name:default-k8s-diff-port-624000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-624000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:47:43.230690    6512 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:43.238734    6512 out.go:177] * Starting "default-k8s-diff-port-624000" primary control-plane node in "default-k8s-diff-port-624000" cluster
	I0805 16:47:43.242832    6512 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:47:43.242849    6512 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 16:47:43.242860    6512 cache.go:56] Caching tarball of preloaded images
	I0805 16:47:43.242946    6512 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:47:43.242952    6512 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:47:43.243013    6512 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/default-k8s-diff-port-624000/config.json ...
	I0805 16:47:43.243521    6512 start.go:360] acquireMachinesLock for default-k8s-diff-port-624000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:47:43.243557    6512 start.go:364] duration metric: took 29.042µs to acquireMachinesLock for "default-k8s-diff-port-624000"
	I0805 16:47:43.243565    6512 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:47:43.243575    6512 fix.go:54] fixHost starting: 
	I0805 16:47:43.243695    6512 fix.go:112] recreateIfNeeded on default-k8s-diff-port-624000: state=Stopped err=<nil>
	W0805 16:47:43.243703    6512 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:47:43.247758    6512 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-624000" ...
	I0805 16:47:43.255722    6512 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:47:43.255757    6512 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:72:f7:2b:25:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/disk.qcow2
	I0805 16:47:43.257901    6512 main.go:141] libmachine: STDOUT: 
	I0805 16:47:43.257922    6512 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:47:43.257959    6512 fix.go:56] duration metric: took 14.387125ms for fixHost
	I0805 16:47:43.257965    6512 start.go:83] releasing machines lock for "default-k8s-diff-port-624000", held for 14.403584ms
	W0805 16:47:43.257971    6512 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:47:43.258007    6512 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:47:43.258011    6512 start.go:729] Will try again in 5 seconds ...
	I0805 16:47:48.260244    6512 start.go:360] acquireMachinesLock for default-k8s-diff-port-624000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:47:49.776596    6512 start.go:364] duration metric: took 1.516263042s to acquireMachinesLock for "default-k8s-diff-port-624000"
	I0805 16:47:49.776764    6512 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:47:49.776784    6512 fix.go:54] fixHost starting: 
	I0805 16:47:49.777572    6512 fix.go:112] recreateIfNeeded on default-k8s-diff-port-624000: state=Stopped err=<nil>
	W0805 16:47:49.777601    6512 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:47:49.794344    6512 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-624000" ...
	I0805 16:47:49.800306    6512 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:47:49.800530    6512 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:72:f7:2b:25:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000/disk.qcow2
	I0805 16:47:49.809261    6512 main.go:141] libmachine: STDOUT: 
	I0805 16:47:49.809323    6512 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:47:49.809396    6512 fix.go:56] duration metric: took 32.612125ms for fixHost
	I0805 16:47:49.809414    6512 start.go:83] releasing machines lock for "default-k8s-diff-port-624000", held for 32.780292ms
	W0805 16:47:49.809627    6512 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-624000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-624000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:47:49.823322    6512 out.go:177] 
	W0805 16:47:49.827485    6512 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:47:49.827519    6512 out.go:239] * 
	* 
	W0805 16:47:49.830111    6512 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:47:49.844509    6512 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-624000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-624000 -n default-k8s-diff-port-624000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-624000 -n default-k8s-diff-port-624000: exit status 7 (56.9315ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-624000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.77s)
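Unlike the FirstStart case, this test goes through the fixHost path: the existing machine is reused and only restarted, but the socket_vmnet_client handoff fails identically, so the profile is left in state=Stopped. A quick way to confirm that QEMU itself was never launched is to inspect the machine directory named throughout the log (the directory path comes from the output above; the check itself is a sketch):

	M=/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/default-k8s-diff-port-624000
	ls "$M"                                      # disk.qcow2, boot2docker.iso, monitor, ...
	cat "$M/qemu.pid" 2>/dev/null || echo "no qemu.pid: QEMU never started"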

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-624000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-624000 -n default-k8s-diff-port-624000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-624000 -n default-k8s-diff-port-624000: exit status 7 (35.213458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-624000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)
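The context "default-k8s-diff-port-624000" does not exist error is the expected knock-on effect rather than a separate bug: minikube writes a profile's context into the kubeconfig only once the node comes up, so every kubectl-based subtest after a failed start dies in client config. A sketch of the check the harness is effectively failing, using the KUBECONFIG path from the log:

	export KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	kubectl config get-contexts -o name | grep -x default-k8s-diff-port-624000 \
	    || echo "context missing: cluster never registered"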

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-624000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-624000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-624000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.493542ms)

** stderr ** 
	error: context "default-k8s-diff-port-624000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-624000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-624000 -n default-k8s-diff-port-624000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-624000 -n default-k8s-diff-port-624000: exit status 7 (32.519208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-624000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-624000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-624000 -n default-k8s-diff-port-624000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-624000 -n default-k8s-diff-port-624000: exit status 7 (28.592333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-624000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.06s)
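The (-want +got) diff above is one-sided because "image list" against a stopped profile returns an empty list, so all eight expected v1.30.3 images show up as wanted-but-missing; nothing here indicates the cached images themselves are absent. Once the socket_vmnet problem is fixed, a spot-check of this shape (hypothetical invocation, same binary and profile as above) should list them:

	out/minikube-darwin-arm64 -p default-k8s-diff-port-624000 image list --format=table
	# or, targeted at a single image:
	out/minikube-darwin-arm64 -p default-k8s-diff-port-624000 image list | grep kube-apiserver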

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-624000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-624000 --alsologtostderr -v=1: exit status 83 (39.948ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-624000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-624000"

-- /stdout --
** stderr ** 
	I0805 16:47:50.096539    6543 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:47:50.096697    6543 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:50.096700    6543 out.go:304] Setting ErrFile to fd 2...
	I0805 16:47:50.096703    6543 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:50.096838    6543 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:47:50.097046    6543 out.go:298] Setting JSON to false
	I0805 16:47:50.097052    6543 mustload.go:65] Loading cluster: default-k8s-diff-port-624000
	I0805 16:47:50.097239    6543 config.go:182] Loaded profile config "default-k8s-diff-port-624000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:47:50.101409    6543 out.go:177] * The control-plane node default-k8s-diff-port-624000 host is not running: state=Stopped
	I0805 16:47:50.105346    6543 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-624000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-624000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-624000 -n default-k8s-diff-port-624000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-624000 -n default-k8s-diff-port-624000: exit status 7 (28.009459ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-624000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-624000 -n default-k8s-diff-port-624000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-624000 -n default-k8s-diff-port-624000: exit status 7 (28.89275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-624000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
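pause is the last mechanical casualty in this group: mustload sees the host Stopped, prints the advisory instead of pausing, and exits 83, distinct from the status exit 7 that the harness tolerates as "may be ok". A guard of this shape (hypothetical, not part of the test suite) makes the dependency explicit when driving minikube from scripts:

	if out/minikube-darwin-arm64 status -p default-k8s-diff-port-624000 >/dev/null 2>&1; then
	    out/minikube-darwin-arm64 pause -p default-k8s-diff-port-624000
	else
	    echo "host not running; skipping pause"
	fi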

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-608000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-608000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (5.184261542s)

-- stdout --
	* [newest-cni-608000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-608000" primary control-plane node in "newest-cni-608000" cluster
	* Restarting existing qemu2 VM for "newest-cni-608000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-608000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
** stderr ** 
	I0805 16:47:53.834029    6580 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:47:53.834161    6580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:53.834164    6580 out.go:304] Setting ErrFile to fd 2...
	I0805 16:47:53.834167    6580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:53.834300    6580 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:47:53.835326    6580 out.go:298] Setting JSON to false
	I0805 16:47:53.851537    6580 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4644,"bootTime":1722897029,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 16:47:53.851609    6580 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:47:53.856847    6580 out.go:177] * [newest-cni-608000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 16:47:53.864893    6580 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:47:53.864950    6580 notify.go:220] Checking for updates...
	I0805 16:47:53.871809    6580 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 16:47:53.874833    6580 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 16:47:53.877857    6580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:47:53.880836    6580 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 16:47:53.883833    6580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:47:53.887055    6580 config.go:182] Loaded profile config "newest-cni-608000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0805 16:47:53.887341    6580 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:47:53.891911    6580 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 16:47:53.898745    6580 start.go:297] selected driver: qemu2
	I0805 16:47:53.898752    6580 start.go:901] validating driver "qemu2" against &{Name:newest-cni-608000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:47:53.898805    6580 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:47:53.901354    6580 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0805 16:47:53.901380    6580 cni.go:84] Creating CNI manager for ""
	I0805 16:47:53.901387    6580 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:47:53.901426    6580 start.go:340] cluster config:
	{Name:newest-cni-608000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:47:53.905096    6580 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:47:53.912706    6580 out.go:177] * Starting "newest-cni-608000" primary control-plane node in "newest-cni-608000" cluster
	I0805 16:47:53.916842    6580 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 16:47:53.916858    6580 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0805 16:47:53.916866    6580 cache.go:56] Caching tarball of preloaded images
	I0805 16:47:53.916923    6580 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 16:47:53.916928    6580 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0805 16:47:53.917004    6580 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/newest-cni-608000/config.json ...
	I0805 16:47:53.917519    6580 start.go:360] acquireMachinesLock for newest-cni-608000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:47:53.917560    6580 start.go:364] duration metric: took 34.916µs to acquireMachinesLock for "newest-cni-608000"
	I0805 16:47:53.917568    6580 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:47:53.917576    6580 fix.go:54] fixHost starting: 
	I0805 16:47:53.917708    6580 fix.go:112] recreateIfNeeded on newest-cni-608000: state=Stopped err=<nil>
	W0805 16:47:53.917717    6580 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:47:53.920808    6580 out.go:177] * Restarting existing qemu2 VM for "newest-cni-608000" ...
	I0805 16:47:53.928799    6580 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:47:53.928831    6580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:6f:11:d0:a9:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/disk.qcow2
	I0805 16:47:53.930896    6580 main.go:141] libmachine: STDOUT: 
	I0805 16:47:53.930917    6580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:47:53.930945    6580 fix.go:56] duration metric: took 13.371042ms for fixHost
	I0805 16:47:53.930950    6580 start.go:83] releasing machines lock for "newest-cni-608000", held for 13.386375ms
	W0805 16:47:53.930956    6580 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:47:53.930985    6580 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:47:53.930990    6580 start.go:729] Will try again in 5 seconds ...
	I0805 16:47:58.933053    6580 start.go:360] acquireMachinesLock for newest-cni-608000: {Name:mk5709b56dff6cc8a40d5a6670b638ad6110d546 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:47:58.933464    6580 start.go:364] duration metric: took 305µs to acquireMachinesLock for "newest-cni-608000"
	I0805 16:47:58.933595    6580 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:47:58.933616    6580 fix.go:54] fixHost starting: 
	I0805 16:47:58.934339    6580 fix.go:112] recreateIfNeeded on newest-cni-608000: state=Stopped err=<nil>
	W0805 16:47:58.934370    6580 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:47:58.944014    6580 out.go:177] * Restarting existing qemu2 VM for "newest-cni-608000" ...
	I0805 16:47:58.947943    6580 qemu.go:418] Using hvf for hardware acceleration
	I0805 16:47:58.948177    6580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:6f:11:d0:a9:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19373-1054/.minikube/machines/newest-cni-608000/disk.qcow2
	I0805 16:47:58.957128    6580 main.go:141] libmachine: STDOUT: 
	I0805 16:47:58.957190    6580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 16:47:58.957286    6580 fix.go:56] duration metric: took 23.672791ms for fixHost
	I0805 16:47:58.957303    6580 start.go:83] releasing machines lock for "newest-cni-608000", held for 23.815583ms
	W0805 16:47:58.957519    6580 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-608000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-608000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 16:47:58.964964    6580 out.go:177] 
	W0805 16:47:58.968991    6580 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 16:47:58.969023    6580 out.go:239] * 
	* 
	W0805 16:47:58.971383    6580 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:47:58.978989    6580 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-608000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-608000 -n newest-cni-608000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-608000 -n newest-cni-608000: exit status 7 (69.519875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-608000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
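
Both restart attempts above fail at the same point: the qemu2 driver shells out to socket_vmnet_client, which cannot dial /var/run/socket_vmnet (the SocketVMnetPath recorded in the cluster config). A minimal sketch for checking the daemon on the test host, assuming socket_vmnet was installed through Homebrew as the qemu2 driver setup describes; the paths are taken from the log above:

	# Is the socket present? The driver dials exactly this path.
	ls -l /var/run/socket_vmnet
	# With a Homebrew install, inspect and restart the service if the socket is missing.
	sudo brew services list | grep socket_vmnet
	sudo brew services restart socket_vmnet

Until the socket is reachable again, every "Restarting existing qemu2 VM" retry fails with the same "Connection refused", which is why the remaining newest-cni subtests below fail fast against a stopped host.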

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-608000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-rc.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-rc.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-608000 -n newest-cni-608000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-608000 -n newest-cni-608000: exit status 7 (28.741875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-608000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
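
Because the VM never came up, `image list` returns an empty set and the cmp diff above reports every expected image as missing (the `-` lines are the wanted entries). On a healthy profile the comparison can be reproduced with the same command the test runs; the jq filter below is an assumption about the JSON field names in minikube's output, not something this test asserts:

	# List the cached image tags from the profile (jq assumed installed).
	out/minikube-darwin-arm64 -p newest-cni-608000 image list --format=json | jq -r '.[].repoTags[]' | sort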

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-608000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-608000 --alsologtostderr -v=1: exit status 83 (45.967167ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-608000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-608000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 16:47:59.158675    6596 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:47:59.158843    6596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:59.158847    6596 out.go:304] Setting ErrFile to fd 2...
	I0805 16:47:59.158849    6596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:47:59.158977    6596 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 16:47:59.159195    6596 out.go:298] Setting JSON to false
	I0805 16:47:59.159205    6596 mustload.go:65] Loading cluster: newest-cni-608000
	I0805 16:47:59.159395    6596 config.go:182] Loaded profile config "newest-cni-608000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0805 16:47:59.167253    6596 out.go:177] * The control-plane node newest-cni-608000 host is not running: state=Stopped
	I0805 16:47:59.171749    6596 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-608000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-608000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-608000 -n newest-cni-608000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-608000 -n newest-cni-608000: exit status 7 (28.802208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-608000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-608000 -n newest-cni-608000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-608000 -n newest-cni-608000: exit status 7 (29.250792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-608000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
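
`pause` exits with status 83 because the profile's host is stopped, and each post-mortem `status` call exits with 7 for the same reason, which the helpers explicitly tolerate ("may be ok"). A quick sketch of observing the state-to-exit-code behavior directly, using the same command the helpers run:

	out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-608000
	echo "status exit code: $?"   # 7 alongside "Stopped" in this run; 0 for a running host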

                                                
                                    

Test pass (161/278)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 12.24
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-rc.0/json-events 17.97
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.11
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.34
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 207.71
38 TestAddons/serial/Volcano 37.97
40 TestAddons/serial/GCPAuth/Namespaces 0.07
42 TestAddons/parallel/Registry 13.71
43 TestAddons/parallel/Ingress 17.98
44 TestAddons/parallel/InspektorGadget 10.24
45 TestAddons/parallel/MetricsServer 5.25
48 TestAddons/parallel/CSI 59.4
49 TestAddons/parallel/Headlamp 15.55
50 TestAddons/parallel/CloudSpanner 5.17
51 TestAddons/parallel/LocalPath 51.78
52 TestAddons/parallel/NvidiaDevicePlugin 5.16
53 TestAddons/parallel/Yakd 10.21
54 TestAddons/StoppedEnableDisable 12.39
62 TestHyperKitDriverInstallOrUpdate 10.44
65 TestErrorSpam/setup 33.62
66 TestErrorSpam/start 0.32
67 TestErrorSpam/status 0.25
68 TestErrorSpam/pause 0.67
69 TestErrorSpam/unpause 0.6
70 TestErrorSpam/stop 64.29
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 49.18
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 35.3
77 TestFunctional/serial/KubeContext 0.03
78 TestFunctional/serial/KubectlGetPods 0.04
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.59
82 TestFunctional/serial/CacheCmd/cache/add_local 1.09
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.03
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
86 TestFunctional/serial/CacheCmd/cache/cache_reload 0.66
87 TestFunctional/serial/CacheCmd/cache/delete 0.07
88 TestFunctional/serial/MinikubeKubectlCmd 0.65
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.92
90 TestFunctional/serial/ExtraConfig 35.34
91 TestFunctional/serial/ComponentHealth 0.04
92 TestFunctional/serial/LogsCmd 0.65
93 TestFunctional/serial/LogsFileCmd 0.59
94 TestFunctional/serial/InvalidService 3.68
96 TestFunctional/parallel/ConfigCmd 0.22
97 TestFunctional/parallel/DashboardCmd 9.52
98 TestFunctional/parallel/DryRun 0.22
99 TestFunctional/parallel/InternationalLanguage 0.11
100 TestFunctional/parallel/StatusCmd 0.24
105 TestFunctional/parallel/AddonsCmd 0.09
106 TestFunctional/parallel/PersistentVolumeClaim 25.87
108 TestFunctional/parallel/SSHCmd 0.12
109 TestFunctional/parallel/CpCmd 0.4
111 TestFunctional/parallel/FileSync 0.06
112 TestFunctional/parallel/CertSync 0.42
116 TestFunctional/parallel/NodeLabels 0.06
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.12
120 TestFunctional/parallel/License 0.41
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.18
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.1
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
132 TestFunctional/parallel/ServiceCmd/DeployApp 7.08
133 TestFunctional/parallel/ServiceCmd/List 0.28
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.27
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.09
136 TestFunctional/parallel/ServiceCmd/Format 0.09
137 TestFunctional/parallel/ServiceCmd/URL 0.09
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.12
139 TestFunctional/parallel/ProfileCmd/profile_list 0.12
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
141 TestFunctional/parallel/MountCmd/any-port 5.14
142 TestFunctional/parallel/MountCmd/specific-port 1.22
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.65
144 TestFunctional/parallel/Version/short 0.04
145 TestFunctional/parallel/Version/components 0.16
146 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
147 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
148 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
149 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
150 TestFunctional/parallel/ImageCommands/ImageBuild 1.61
151 TestFunctional/parallel/ImageCommands/Setup 1.75
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.55
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.36
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.2
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.22
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.14
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.24
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.19
159 TestFunctional/parallel/DockerEnv/bash 0.26
160 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
161 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
162 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
163 TestFunctional/delete_echo-server_images 0.03
164 TestFunctional/delete_my-image_image 0.01
165 TestFunctional/delete_minikube_cached_images 0.01
169 TestMultiControlPlane/serial/StartCluster 199.28
170 TestMultiControlPlane/serial/DeployApp 5.98
171 TestMultiControlPlane/serial/PingHostFromPods 0.76
172 TestMultiControlPlane/serial/AddWorkerNode 56.25
173 TestMultiControlPlane/serial/NodeLabels 0.13
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.26
175 TestMultiControlPlane/serial/CopyFile 4.35
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 150.08
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 3.95
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.2
217 TestMainNoArgs 0.03
264 TestStoppedBinaryUpgrade/Setup 0.95
276 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
280 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
281 TestNoKubernetes/serial/ProfileList 31.33
282 TestNoKubernetes/serial/Stop 3.36
284 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
294 TestStoppedBinaryUpgrade/MinikubeLogs 0.78
299 TestStartStop/group/old-k8s-version/serial/Stop 3.6
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
310 TestStartStop/group/no-preload/serial/Stop 2.85
313 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
323 TestStartStop/group/embed-certs/serial/Stop 3.29
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
332 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.32
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
337 TestStartStop/group/newest-cni/serial/DeployApp 0
338 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
341 TestStartStop/group/newest-cni/serial/Stop 3.71
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-532000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-532000: exit status 85 (96.185292ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-532000 | jenkins | v1.33.1 | 05 Aug 24 15:46 PDT |          |
	|         | -p download-only-532000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 15:46:49
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 15:46:49.744180    1553 out.go:291] Setting OutFile to fd 1 ...
	I0805 15:46:49.744355    1553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 15:46:49.744358    1553 out.go:304] Setting ErrFile to fd 2...
	I0805 15:46:49.744360    1553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 15:46:49.744493    1553 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	W0805 15:46:49.744582    1553 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19373-1054/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19373-1054/.minikube/config/config.json: no such file or directory
	I0805 15:46:49.745966    1553 out.go:298] Setting JSON to true
	I0805 15:46:49.765747    1553 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":980,"bootTime":1722897029,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 15:46:49.765846    1553 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 15:46:49.772569    1553 out.go:97] [download-only-532000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 15:46:49.772726    1553 notify.go:220] Checking for updates...
	W0805 15:46:49.772734    1553 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball: no such file or directory
	I0805 15:46:49.776493    1553 out.go:169] MINIKUBE_LOCATION=19373
	I0805 15:46:49.779384    1553 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 15:46:49.785611    1553 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 15:46:49.788517    1553 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 15:46:49.792216    1553 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	W0805 15:46:49.799527    1553 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 15:46:49.799771    1553 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 15:46:49.804174    1553 out.go:97] Using the qemu2 driver based on user configuration
	I0805 15:46:49.804192    1553 start.go:297] selected driver: qemu2
	I0805 15:46:49.804206    1553 start.go:901] validating driver "qemu2" against <nil>
	I0805 15:46:49.804257    1553 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 15:46:49.809104    1553 out.go:169] Automatically selected the socket_vmnet network
	I0805 15:46:49.815727    1553 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0805 15:46:49.815819    1553 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 15:46:49.815882    1553 cni.go:84] Creating CNI manager for ""
	I0805 15:46:49.815899    1553 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0805 15:46:49.815951    1553 start.go:340] cluster config:
	{Name:download-only-532000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-532000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 15:46:49.821429    1553 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 15:46:49.826135    1553 out.go:97] Downloading VM boot image ...
	I0805 15:46:49.826152    1553 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0805 15:46:58.636573    1553 out.go:97] Starting "download-only-532000" primary control-plane node in "download-only-532000" cluster
	I0805 15:46:58.636602    1553 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 15:46:58.696082    1553 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0805 15:46:58.696103    1553 cache.go:56] Caching tarball of preloaded images
	I0805 15:46:58.696317    1553 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 15:46:58.701442    1553 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0805 15:46:58.701450    1553 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 15:46:58.797960    1553 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0805 15:47:05.804579    1553 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 15:47:05.804765    1553 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 15:47:06.501130    1553 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0805 15:47:06.501333    1553 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/download-only-532000/config.json ...
	I0805 15:47:06.501352    1553 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/download-only-532000/config.json: {Name:mk3cabbd89337a06e6e35d69d98fb82611c24728 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 15:47:06.501612    1553 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 15:47:06.501820    1553 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0805 15:47:07.088151    1553 out.go:169] 
	W0805 15:47:07.092282    1553 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19373-1054/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107035d20 0x107035d20 0x107035d20 0x107035d20 0x107035d20 0x107035d20 0x107035d20] Decompressors:map[bz2:0x1400000f5e0 gz:0x1400000f5e8 tar:0x1400000f590 tar.bz2:0x1400000f5a0 tar.gz:0x1400000f5b0 tar.xz:0x1400000f5c0 tar.zst:0x1400000f5d0 tbz2:0x1400000f5a0 tgz:0x1400000f5b0 txz:0x1400000f5c0 tzst:0x1400000f5d0 xz:0x1400000f5f0 zip:0x1400000f600 zst:0x1400000f5f8] Getters:map[file:0x1400089c850 http:0x14000816320 https:0x14000816370] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0805 15:47:07.092308    1553 out_reason.go:110] 
	W0805 15:47:07.100255    1553 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 15:47:07.104051    1553 out.go:169] 
	
	
	* The control-plane node download-only-532000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-532000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
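
The one real failure buried in this otherwise-passing log is the kubectl cache step: dl.k8s.io answers 404 for the v1.20.0 darwin/arm64 checksum file, presumably because no darwin/arm64 kubectl was published for that release. The 404 can be confirmed directly with the URL from the log above:

	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1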

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-532000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (12.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-919000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-919000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (12.243170959s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (12.24s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-919000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-919000: exit status 85 (77.116666ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-532000 | jenkins | v1.33.1 | 05 Aug 24 15:46 PDT |                     |
	|         | -p download-only-532000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 05 Aug 24 15:47 PDT | 05 Aug 24 15:47 PDT |
	| delete  | -p download-only-532000        | download-only-532000 | jenkins | v1.33.1 | 05 Aug 24 15:47 PDT | 05 Aug 24 15:47 PDT |
	| start   | -o=json --download-only        | download-only-919000 | jenkins | v1.33.1 | 05 Aug 24 15:47 PDT |                     |
	|         | -p download-only-919000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 15:47:07
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 15:47:07.513582    1578 out.go:291] Setting OutFile to fd 1 ...
	I0805 15:47:07.513716    1578 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 15:47:07.513720    1578 out.go:304] Setting ErrFile to fd 2...
	I0805 15:47:07.513722    1578 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 15:47:07.513848    1578 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 15:47:07.514885    1578 out.go:298] Setting JSON to true
	I0805 15:47:07.530887    1578 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":998,"bootTime":1722897029,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 15:47:07.530951    1578 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 15:47:07.535179    1578 out.go:97] [download-only-919000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 15:47:07.535272    1578 notify.go:220] Checking for updates...
	I0805 15:47:07.539214    1578 out.go:169] MINIKUBE_LOCATION=19373
	I0805 15:47:07.542181    1578 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 15:47:07.546144    1578 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 15:47:07.549187    1578 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 15:47:07.552071    1578 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	W0805 15:47:07.558194    1578 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 15:47:07.558361    1578 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 15:47:07.559797    1578 out.go:97] Using the qemu2 driver based on user configuration
	I0805 15:47:07.559804    1578 start.go:297] selected driver: qemu2
	I0805 15:47:07.559808    1578 start.go:901] validating driver "qemu2" against <nil>
	I0805 15:47:07.559849    1578 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 15:47:07.563119    1578 out.go:169] Automatically selected the socket_vmnet network
	I0805 15:47:07.568325    1578 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0805 15:47:07.568417    1578 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 15:47:07.568450    1578 cni.go:84] Creating CNI manager for ""
	I0805 15:47:07.568458    1578 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 15:47:07.568463    1578 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 15:47:07.568510    1578 start.go:340] cluster config:
	{Name:download-only-919000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-919000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 15:47:07.571919    1578 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 15:47:07.575131    1578 out.go:97] Starting "download-only-919000" primary control-plane node in "download-only-919000" cluster
	I0805 15:47:07.575137    1578 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 15:47:07.636390    1578 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 15:47:07.636412    1578 cache.go:56] Caching tarball of preloaded images
	I0805 15:47:07.636577    1578 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 15:47:07.641769    1578 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0805 15:47:07.641777    1578 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0805 15:47:07.716452    1578 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 15:47:13.247306    1578 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0805 15:47:13.247488    1578 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0805 15:47:13.793086    1578 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 15:47:13.793277    1578 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/download-only-919000/config.json ...
	I0805 15:47:13.793292    1578 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/download-only-919000/config.json: {Name:mka015f1f2597df06f3103b1fc360cfe62aa3780 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 15:47:13.793556    1578 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 15:47:13.793683    1578 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/darwin/arm64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-919000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-919000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)
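
Each preload download embeds its md5 in the URL (`?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca` above), and preload.go verifies it again after saving the tarball. The same check can be repeated manually on the cached file; `md5 -q` is the macOS form of the command:

	md5 -q /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	# should print 5a76dba1959f6b6fc5e29e1e172ab9ca, matching the checksum in the download URL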

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-919000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-rc.0/json-events (17.97s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-697000 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-697000 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=qemu2 : (17.972687167s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (17.97s)

TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-697000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-697000: exit status 85 (75.831208ms)
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-532000 | jenkins | v1.33.1 | 05 Aug 24 15:46 PDT |                     |
	|         | -p download-only-532000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 05 Aug 24 15:47 PDT | 05 Aug 24 15:47 PDT |
	| delete  | -p download-only-532000           | download-only-532000 | jenkins | v1.33.1 | 05 Aug 24 15:47 PDT | 05 Aug 24 15:47 PDT |
	| start   | -o=json --download-only           | download-only-919000 | jenkins | v1.33.1 | 05 Aug 24 15:47 PDT |                     |
	|         | -p download-only-919000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 05 Aug 24 15:47 PDT | 05 Aug 24 15:47 PDT |
	| delete  | -p download-only-919000           | download-only-919000 | jenkins | v1.33.1 | 05 Aug 24 15:47 PDT | 05 Aug 24 15:47 PDT |
	| start   | -o=json --download-only           | download-only-697000 | jenkins | v1.33.1 | 05 Aug 24 15:47 PDT |                     |
	|         | -p download-only-697000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 15:47:20
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 15:47:20.046779    1601 out.go:291] Setting OutFile to fd 1 ...
	I0805 15:47:20.046908    1601 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 15:47:20.046911    1601 out.go:304] Setting ErrFile to fd 2...
	I0805 15:47:20.046914    1601 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 15:47:20.047047    1601 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 15:47:20.048151    1601 out.go:298] Setting JSON to true
	I0805 15:47:20.064026    1601 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1011,"bootTime":1722897029,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 15:47:20.064115    1601 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 15:47:20.068995    1601 out.go:97] [download-only-697000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 15:47:20.069094    1601 notify.go:220] Checking for updates...
	I0805 15:47:20.072801    1601 out.go:169] MINIKUBE_LOCATION=19373
	I0805 15:47:20.076986    1601 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 15:47:20.081051    1601 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 15:47:20.083951    1601 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 15:47:20.087016    1601 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	W0805 15:47:20.126027    1601 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 15:47:20.126258    1601 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 15:47:20.129998    1601 out.go:97] Using the qemu2 driver based on user configuration
	I0805 15:47:20.130009    1601 start.go:297] selected driver: qemu2
	I0805 15:47:20.130014    1601 start.go:901] validating driver "qemu2" against <nil>
	I0805 15:47:20.130074    1601 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 15:47:20.133037    1601 out.go:169] Automatically selected the socket_vmnet network
	I0805 15:47:20.136842    1601 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0805 15:47:20.136944    1601 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 15:47:20.136981    1601 cni.go:84] Creating CNI manager for ""
	I0805 15:47:20.136989    1601 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 15:47:20.136995    1601 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 15:47:20.137035    1601 start.go:340] cluster config:
	{Name:download-only-697000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-697000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 15:47:20.140951    1601 iso.go:125] acquiring lock: {Name:mk74792ac5f24ed4daf8ac0dec639fc320caa2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 15:47:20.143984    1601 out.go:97] Starting "download-only-697000" primary control-plane node in "download-only-697000" cluster
	I0805 15:47:20.143993    1601 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 15:47:20.201225    1601 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0805 15:47:20.201254    1601 cache.go:56] Caching tarball of preloaded images
	I0805 15:47:20.201476    1601 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 15:47:20.205741    1601 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0805 15:47:20.205751    1601 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 15:47:20.281013    1601 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4?checksum=md5:c1f196b49f29ebea060b9249b6cb8e03 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0805 15:47:27.835389    1601 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 15:47:27.835543    1601 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 15:47:28.357187    1601 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0805 15:47:28.357397    1601 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/download-only-697000/config.json ...
	I0805 15:47:28.357412    1601 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/download-only-697000/config.json: {Name:mk4424fbb61d1b2d22a194f8fd602146ff512c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 15:47:28.357663    1601 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 15:47:28.357786    1601 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-rc.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19373-1054/.minikube/cache/darwin/arm64/v1.31.0-rc.0/kubectl
	
	
	* The control-plane node download-only-697000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-697000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-697000
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.34s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-120000 --alsologtostderr --binary-mirror http://127.0.0.1:49325 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-120000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-120000
--- PASS: TestBinaryMirror (0.34s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-299000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-299000: exit status 85 (59.369041ms)
-- stdout --
	* Profile "addons-299000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-299000"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-299000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-299000: exit status 85 (55.520375ms)
-- stdout --
	* Profile "addons-299000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-299000"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (207.71s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-299000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-299000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m27.709988292s)
--- PASS: TestAddons/Setup (207.71s)

TestAddons/serial/Volcano (37.97s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 7.726167ms
addons_test.go:897: volcano-scheduler stabilized in 7.759167ms
addons_test.go:905: volcano-admission stabilized in 7.767125ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-sl9ln" [1b0c7c67-07e0-423d-96e3-6088b92fd875] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003785375s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-f95ss" [d56422ea-11cf-4584-a83f-f946f64d5c01] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00381775s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-rlkdv" [a0fad876-3882-4398-91ed-d4b174b878f9] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003869459s
addons_test.go:932: (dbg) Run:  kubectl --context addons-299000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-299000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-299000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [f109420e-fd31-4928-b8ea-5a6212d19407] Pending
helpers_test.go:344: "test-job-nginx-0" [f109420e-fd31-4928-b8ea-5a6212d19407] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [f109420e-fd31-4928-b8ea-5a6212d19407] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003837667s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-299000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-299000 addons disable volcano --alsologtostderr -v=1: (9.744472417s)
--- PASS: TestAddons/serial/Volcano (37.97s)

TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-299000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-299000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/parallel/Registry (13.71s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.213417ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-l8v4m" [acd427f4-0303-4398-8e62-a3bbb2b43e1f] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003941541s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qhg4d" [fda494e7-7701-42d6-b333-678e79f3ceb6] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002322625s
addons_test.go:342: (dbg) Run:  kubectl --context addons-299000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-299000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-299000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.414996s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-299000 ip
2024/08/05 15:52:15 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-299000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.71s)

TestAddons/parallel/Ingress (17.98s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-299000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-299000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-299000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [73401de4-7395-471e-9032-c3b4403079f0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [73401de4-7395-471e-9032-c3b4403079f0] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003593042s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-299000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-299000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-299000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-299000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-darwin-arm64 -p addons-299000 addons disable ingress-dns --alsologtostderr -v=1: (1.216427125s)
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-299000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-299000 addons disable ingress --alsologtostderr -v=1: (7.199420458s)
--- PASS: TestAddons/parallel/Ingress (17.98s)

TestAddons/parallel/InspektorGadget (10.24s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-89zlb" [bcd72e2f-99a2-4d04-80a1-7aee25ccc125] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003990125s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-299000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-299000: (5.231291291s)
--- PASS: TestAddons/parallel/InspektorGadget (10.24s)

TestAddons/parallel/MetricsServer (5.25s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.289375ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-x2h4w" [4ffa7152-0609-4df6-942e-d21caa590ac9] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00395625s
addons_test.go:417: (dbg) Run:  kubectl --context addons-299000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-299000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.25s)

TestAddons/parallel/CSI (59.4s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.963417ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-299000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-299000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c38eb159-82dc-4901-aa09-78146b36df64] Pending
helpers_test.go:344: "task-pv-pod" [c38eb159-82dc-4901-aa09-78146b36df64] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c38eb159-82dc-4901-aa09-78146b36df64] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.002568125s
addons_test.go:590: (dbg) Run:  kubectl --context addons-299000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-299000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-299000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-299000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-299000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-299000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-299000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [02433a43-8139-43e3-975e-6016caa7b640] Pending
helpers_test.go:344: "task-pv-pod-restore" [02433a43-8139-43e3-975e-6016caa7b640] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [02433a43-8139-43e3-975e-6016caa7b640] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003618084s
addons_test.go:632: (dbg) Run:  kubectl --context addons-299000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-299000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-299000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-299000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-299000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.09736425s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-299000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (59.40s)

TestAddons/parallel/Headlamp (15.55s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-299000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-9d868696f-bgpbs" [1e43ae34-24b5-494c-a765-fb8f6f521c1e] Pending
helpers_test.go:344: "headlamp-9d868696f-bgpbs" [1e43ae34-24b5-494c-a765-fb8f6f521c1e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-9d868696f-bgpbs" [1e43ae34-24b5-494c-a765-fb8f6f521c1e] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003939125s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-299000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-299000 addons disable headlamp --alsologtostderr -v=1: (5.197546958s)
--- PASS: TestAddons/parallel/Headlamp (15.55s)

TestAddons/parallel/CloudSpanner (5.17s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-cbbrh" [6c5c9d2d-cea0-49eb-990e-6b7e249408d3] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004197958s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-299000
--- PASS: TestAddons/parallel/CloudSpanner (5.17s)

TestAddons/parallel/LocalPath (51.78s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-299000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-299000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3bb40a35-2d83-4d41-a571-e74e959e05f6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3bb40a35-2d83-4d41-a571-e74e959e05f6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3bb40a35-2d83-4d41-a571-e74e959e05f6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00405475s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-299000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-299000 ssh "cat /opt/local-path-provisioner/pvc-4917bdec-67c8-48f2-987a-1ca7bbc6f48c_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-299000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-299000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-299000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-299000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.319592708s)
--- PASS: TestAddons/parallel/LocalPath (51.78s)

TestAddons/parallel/NvidiaDevicePlugin (5.16s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-g24h9" [ac65d6d9-6740-4280-9e8e-4cd0917e2934] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.002900125s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-299000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.16s)

TestAddons/parallel/Yakd (10.21s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-7k5ff" [714acc86-1d2d-40a0-81c7-d6fe235392d6] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003687292s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-299000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-299000 addons disable yakd --alsologtostderr -v=1: (5.20270775s)
--- PASS: TestAddons/parallel/Yakd (10.21s)

TestAddons/StoppedEnableDisable (12.39s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-299000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-299000: (12.196532458s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-299000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-299000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-299000
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

TestHyperKitDriverInstallOrUpdate (10.44s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.44s)

TestErrorSpam/setup (33.62s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-373000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-373000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 --driver=qemu2 : (33.6229855s)
--- PASS: TestErrorSpam/setup (33.62s)

TestErrorSpam/start (0.32s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.67s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 pause
--- PASS: TestErrorSpam/pause (0.67s)

TestErrorSpam/unpause (0.6s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 unpause
--- PASS: TestErrorSpam/unpause (0.60s)

TestErrorSpam/stop (64.29s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 stop: (12.201158084s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 stop: (26.063045459s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 stop: (26.026120917s)
--- PASS: TestErrorSpam/stop (64.29s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19373-1054/.minikube/files/etc/test/nested/copy/1551/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (49.18s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-280000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0805 15:56:06.685018    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
E0805 15:56:06.691943    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
E0805 15:56:06.704001    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
E0805 15:56:06.726071    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
E0805 15:56:06.768164    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
E0805 15:56:06.850243    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
E0805 15:56:07.012301    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
E0805 15:56:07.334386    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
E0805 15:56:07.976478    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
E0805 15:56:09.258556    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
E0805 15:56:11.820624    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
E0805 15:56:16.941838    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-280000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (49.182279916s)
--- PASS: TestFunctional/serial/StartWithProxy (49.18s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.3s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-280000 --alsologtostderr -v=8
E0805 15:56:27.182707    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
E0805 15:56:47.664665    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-280000 --alsologtostderr -v=8: (35.300368417s)
functional_test.go:659: soft start took 35.3007585s for "functional-280000" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.30s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-280000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-280000 cache add registry.k8s.io/pause:3.1: (1.0312375s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.59s)

TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1526509559/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 cache add minikube-local-cache-test:functional-280000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 cache delete minikube-local-cache-test:functional-280000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-280000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-280000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (64.783125ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.66s)
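
The reload flow above is the interesting part: the image is removed inside the node, `crictl inspecti` is expected to fail, and after `minikube cache reload` the same inspection succeeds. A minimal Go sketch of that assertion pattern (not part of the test suite; assumes `minikube` is on PATH and reuses the profile name from this run):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes minikube with the given args against a profile; a nil error means exit status 0.
func run(profile string, args ...string) error {
	return exec.Command("minikube", append([]string{"-p", profile}, args...)...).Run()
}

func main() {
	const profile = "functional-280000" // profile name taken from this report
	// Remove the image inside the node; inspecti should now fail.
	_ = run(profile, "ssh", "sudo", "docker", "rmi", "registry.k8s.io/pause:latest")
	if run(profile, "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest") == nil {
		fmt.Println("unexpected: image still present after rmi")
	}
	// Reload from the host-side cache; inspecti should succeed again.
	_ = run(profile, "cache", "reload")
	if err := run(profile, "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("unexpected: image missing after cache reload:", err)
	}
}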

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.65s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 kubectl -- --context functional-280000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.65s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.92s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-280000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.92s)

TestFunctional/serial/ExtraConfig (35.34s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-280000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0805 15:57:28.626380    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-280000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.343773084s)
functional_test.go:757: restart took 35.343891333s for "functional-280000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.34s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-280000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.65s)

TestFunctional/serial/LogsFileCmd (0.59s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd1956569135/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.59s)

TestFunctional/serial/InvalidService (3.68s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-280000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-280000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-280000: exit status 115 (99.347416ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31377 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-280000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.68s)
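
The exit code is the contract being tested here: a Service whose selector matches no running pod makes `minikube service` fail fast with status 115 (SVC_UNREACHABLE in this run) instead of hanging. A hedged Go sketch of checking that contract (assumes `minikube` on PATH and the service/profile names from this report):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Ask minikube to open a service that has no backing pod.
	cmd := exec.Command("minikube", "service", "invalid-svc", "-p", "functional-280000")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// exit status 115 corresponds to SVC_UNREACHABLE in this report
		fmt.Println("service command exit code:", exitErr.ExitCode())
	}
}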

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-280000 config get cpus: exit status 14 (32.688209ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-280000 config get cpus: exit status 14 (32.197791ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
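
The pattern above is a set/get/unset round-trip, with `config get` on an unset key exiting 14. A minimal Go sketch of the same round-trip (hypothetical helper, not part of the suite; assumes `minikube` on PATH and this run's profile name):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// configCmd runs `minikube config <args>` against the profile and returns trimmed stdout.
func configCmd(args ...string) (string, error) {
	full := append([]string{"-p", "functional-280000", "config"}, args...)
	out, err := exec.Command("minikube", full...).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// set -> get should round-trip; get after unset exits non-zero (14 in this report).
	_, _ = configCmd("set", "cpus", "2")
	v, _ := configCmd("get", "cpus")
	fmt.Println("cpus =", v) // expected: 2
	_, _ = configCmd("unset", "cpus")
	if _, err := configCmd("get", "cpus"); err != nil {
		fmt.Println("get after unset failed as expected:", err)
	}
}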

TestFunctional/parallel/DashboardCmd (9.52s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-280000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-280000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2229: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.52s)

TestFunctional/parallel/DryRun (0.22s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-280000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-280000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.813667ms)

-- stdout --
	* [functional-280000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0805 15:58:31.449701    2216 out.go:291] Setting OutFile to fd 1 ...
	I0805 15:58:31.449825    2216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 15:58:31.449828    2216 out.go:304] Setting ErrFile to fd 2...
	I0805 15:58:31.449830    2216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 15:58:31.449975    2216 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 15:58:31.450994    2216 out.go:298] Setting JSON to false
	I0805 15:58:31.467374    2216 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1682,"bootTime":1722897029,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 15:58:31.467453    2216 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 15:58:31.471825    2216 out.go:177] * [functional-280000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 15:58:31.478657    2216 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 15:58:31.478693    2216 notify.go:220] Checking for updates...
	I0805 15:58:31.485696    2216 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 15:58:31.488650    2216 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 15:58:31.491690    2216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 15:58:31.494739    2216 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 15:58:31.497724    2216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 15:58:31.500977    2216 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 15:58:31.501215    2216 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 15:58:31.504693    2216 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 15:58:31.511711    2216 start.go:297] selected driver: qemu2
	I0805 15:58:31.511716    2216 start.go:901] validating driver "qemu2" against &{Name:functional-280000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:functional-280000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 15:58:31.511765    2216 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 15:58:31.517646    2216 out.go:177] 
	W0805 15:58:31.521689    2216 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0805 15:58:31.525699    2216 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-280000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)
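
What `--dry-run` buys you is visible in the log: the config is validated (here, 250MiB requested vs. a 1800MB usable minimum) and the command exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before any driver work happens. A hedged Go sketch of probing that validation (assumes `minikube` on PATH and this run's profile):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// An undersized --memory request should be rejected during validation.
	cmd := exec.Command("minikube", "start", "-p", "functional-280000",
		"--dry-run", "--memory", "250MB", "--driver=qemu2")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// exit status 23 maps to RSRC_INSUFFICIENT_REQ_MEMORY in this report
		fmt.Println("dry run rejected undersized memory, exit:", exitErr.ExitCode())
	}
}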

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-280000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-280000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (109.513375ms)

-- stdout --
	* [functional-280000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0805 15:58:31.336127    2212 out.go:291] Setting OutFile to fd 1 ...
	I0805 15:58:31.336250    2212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 15:58:31.336253    2212 out.go:304] Setting ErrFile to fd 2...
	I0805 15:58:31.336255    2212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 15:58:31.336373    2212 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
	I0805 15:58:31.337840    2212 out.go:298] Setting JSON to false
	I0805 15:58:31.355248    2212 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1682,"bootTime":1722897029,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 15:58:31.355329    2212 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 15:58:31.358033    2212 out.go:177] * [functional-280000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0805 15:58:31.366821    2212 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 15:58:31.366898    2212 notify.go:220] Checking for updates...
	I0805 15:58:31.373696    2212 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	I0805 15:58:31.376774    2212 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 15:58:31.379700    2212 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 15:58:31.382698    2212 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	I0805 15:58:31.385720    2212 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 15:58:31.388983    2212 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 15:58:31.389249    2212 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 15:58:31.393738    2212 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0805 15:58:31.399686    2212 start.go:297] selected driver: qemu2
	I0805 15:58:31.399694    2212 start.go:901] validating driver "qemu2" against &{Name:functional-280000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:functional-280000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 15:58:31.399741    2212 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 15:58:31.405729    2212 out.go:177] 
	W0805 15:58:31.409749    2212 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0805 15:58:31.413687    2212 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/PersistentVolumeClaim (25.87s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [df83ad21-93f7-4075-b497-9bf9bf10befa] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004570917s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-280000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-280000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-280000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-280000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d9b86ca7-1aac-4688-800d-ebec2ed79269] Pending
helpers_test.go:344: "sp-pod" [d9b86ca7-1aac-4688-800d-ebec2ed79269] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d9b86ca7-1aac-4688-800d-ebec2ed79269] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004251125s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-280000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-280000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-280000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6f2cb084-910f-4c7a-9eac-41848c11c437] Pending
helpers_test.go:344: "sp-pod" [6f2cb084-910f-4c7a-9eac-41848c11c437] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6f2cb084-910f-4c7a-9eac-41848c11c437] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0027985s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-280000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.87s)
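
The persistence check above works by writing a file into the PVC-backed mount, deleting the pod, recreating it, and listing the file again; each wait is a poll on pod status. A minimal Go sketch of that polling step (hypothetical helper; assumes `kubectl` on PATH and the context/pod names from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPhase polls a pod's .status.phase via kubectl until it matches want.
func waitForPhase(ctx, pod, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", ctx, "get", "pod", pod,
			"-o", "jsonpath={.status.phase}").Output()
		if strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s never reached phase %s", pod, want)
}

func main() {
	if err := waitForPhase("functional-280000", "sp-pod", "Running", 3*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("sp-pod is Running; data on the PVC should survive pod deletion")
}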

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.4s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh -n functional-280000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 cp functional-280000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1782394632/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh -n functional-280000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh -n functional-280000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.40s)

TestFunctional/parallel/FileSync (0.06s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1551/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "sudo cat /etc/test/nested/copy/1551/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)

TestFunctional/parallel/CertSync (0.42s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1551.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "sudo cat /etc/ssl/certs/1551.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1551.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "sudo cat /usr/share/ca-certificates/1551.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/15512.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "sudo cat /etc/ssl/certs/15512.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/15512.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "sudo cat /usr/share/ca-certificates/15512.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.42s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-280000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-280000 ssh "sudo systemctl is-active crio": exit status 1 (118.589708ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)
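
Note how the "failure" above is actually the expected result: with docker as the active runtime, `systemctl is-active crio` prints "inactive" and exits non-zero (status 3 on the node, propagated as a non-zero `minikube ssh` exit). A hedged Go sketch of that check (assumes `minikube` on PATH and this run's profile):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// CombinedOutput captures the printed state; err is non-nil when crio is inactive.
	out, err := exec.Command("minikube", "-p", "functional-280000",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	fmt.Printf("crio state: %s(remote exit propagated as: %v)\n", string(out), err)
}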

TestFunctional/parallel/License (0.41s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.41s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.18s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-280000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-280000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-280000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-280000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2070: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.18s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-280000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-280000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [fb927fa2-8387-4abb-a7cf-8dc4b4cc6b8c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [fb927fa2-8387-4abb-a7cf-8dc4b4cc6b8c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003550625s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-280000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)
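
While `minikube tunnel` is running, the LoadBalancer service acquires a real ingress IP that can be read straight out of the service status, which is exactly what the jsonpath query above does. A minimal Go sketch of the same read (assumes `kubectl` on PATH and this run's context and service names):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-280000",
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		fmt.Println("no ingress IP yet:", err)
		return
	}
	fmt.Println("tunnel ingress IP:", string(out))
}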

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.36.225 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
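
The dig invocation above queries the in-cluster DNS server directly (10.96.0.10 is the default kube-dns ClusterIP), reachable from the host only while the tunnel is up. A hedged Go equivalent using a custom resolver (a sketch, not the test's method):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Route all lookups to the cluster DNS server instead of the host resolver.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ips, err := r.LookupHost(context.Background(), "nginx-svc.default.svc.cluster.local.")
	fmt.Println(ips, err)
}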

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-280000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-280000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-280000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-gvtp2" [fe58daea-77a5-4fb3-a6b2-9c981077fa30] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-gvtp2" [fe58daea-77a5-4fb3-a6b2-9c981077fa30] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004245458s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.08s)

TestFunctional/parallel/ServiceCmd/List (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 service list -o json
functional_test.go:1490: Took "273.980958ms" to run "out/minikube-darwin-arm64 -p functional-280000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:30110
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)

TestFunctional/parallel/ServiceCmd/Format (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)

TestFunctional/parallel/ServiceCmd/URL (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:30110
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.09s)
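
`service <name> --url` resolves the NodePort endpoint (here http://192.168.105.4:30110) without opening a browser, so the URL can be probed programmatically. A minimal Go sketch of fetching and hitting it (assumes `minikube` on PATH and this run's profile and deployment names):

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-280000",
		"service", "hello-node", "--url").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	url := strings.TrimSpace(string(out))
	resp, err := http.Get(url) // probe the NodePort endpoint directly
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(url, "->", resp.Status)
}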

TestFunctional/parallel/ProfileCmd/profile_not_create (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.12s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "83.19475ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "33.572166ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "82.001167ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "33.52225ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

TestFunctional/parallel/MountCmd/any-port (5.14s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2650244257/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722898703070171000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2650244257/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722898703070171000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2650244257/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722898703070171000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2650244257/001/test-1722898703070171000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (55.4695ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  5 22:58 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  5 22:58 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  5 22:58 test-1722898703070171000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh cat /mount-9p/test-1722898703070171000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-280000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5f8d0f0a-86db-4387-9898-7c981912b13e] Pending
helpers_test.go:344: "busybox-mount" [5f8d0f0a-86db-4387-9898-7c981912b13e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [5f8d0f0a-86db-4387-9898-7c981912b13e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5f8d0f0a-86db-4387-9898-7c981912b13e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.0037335s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-280000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2650244257/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.14s)

TestFunctional/parallel/MountCmd/specific-port (1.22s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3431790094/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (57.959792ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3431790094/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-280000 ssh "sudo umount -f /mount-9p": exit status 1 (57.30475ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-280000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3431790094/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.22s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.65s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1561478653/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1561478653/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1561478653/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T" /mount1: exit status 1 (64.010959ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T" /mount3: exit status 1 (53.703916ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-280000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1561478653/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1561478653/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1561478653/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.65s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.16s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.16s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-280000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-280000
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-280000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-280000 image ls --format short --alsologtostderr:
I0805 15:58:40.065934    2365 out.go:291] Setting OutFile to fd 1 ...
I0805 15:58:40.066076    2365 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 15:58:40.066079    2365 out.go:304] Setting ErrFile to fd 2...
I0805 15:58:40.066082    2365 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 15:58:40.066211    2365 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
I0805 15:58:40.066650    2365 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 15:58:40.066712    2365 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 15:58:40.067550    2365 ssh_runner.go:195] Run: systemctl --version
I0805 15:58:40.067565    2365 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/functional-280000/id_rsa Username:docker}
I0805 15:58:40.090655    2365 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-280000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.30.3           | 61773190d42ff | 112MB  |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 2351f570ed0ea | 87.9MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | alpine            | d7cd33d7d4ed1 | 44.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-scheduler              | v1.30.3           | d48f992a22722 | 60.5MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/minikube-local-cache-test | functional-280000 | 23a44f315ab4f | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 8e97cdb19e7cc | 107MB  |
| docker.io/library/nginx                     | latest            | 43b17fe33c4b4 | 193MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| docker.io/kicbase/echo-server               | functional-280000 | ce2d2cda2d858 | 4.78MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-280000 image ls --format table --alsologtostderr:
I0805 15:58:40.197929    2369 out.go:291] Setting OutFile to fd 1 ...
I0805 15:58:40.198092    2369 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 15:58:40.198096    2369 out.go:304] Setting ErrFile to fd 2...
I0805 15:58:40.198098    2369 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 15:58:40.198260    2369 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
I0805 15:58:40.198684    2369 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 15:58:40.198748    2369 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 15:58:40.199550    2369 ssh_runner.go:195] Run: systemctl --version
I0805 15:58:40.199558    2369 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/functional-280000/id_rsa Username:docker}
I0805 15:58:40.223127    2369 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-280000 image ls --format json --alsologtostderr:
[{"id":"23a44f315ab4f0a8bdb661569b8e8b9b1eccfa48847689afffc05d1c13ef1696","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-280000"],"size":"30"},{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"60500000"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"112000000"},{"id":"d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"107000000"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"87900000"},{"id":"43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-280000"],"size":"4780000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-280000 image ls --format json --alsologtostderr:
I0805 15:58:40.130952    2367 out.go:291] Setting OutFile to fd 1 ...
I0805 15:58:40.131107    2367 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 15:58:40.131111    2367 out.go:304] Setting ErrFile to fd 2...
I0805 15:58:40.131113    2367 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 15:58:40.131256    2367 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
I0805 15:58:40.131679    2367 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 15:58:40.131739    2367 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 15:58:40.132626    2367 ssh_runner.go:195] Run: systemctl --version
I0805 15:58:40.132636    2367 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/functional-280000/id_rsa Username:docker}
I0805 15:58:40.155264    2367 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-280000 image ls --format yaml --alsologtostderr:
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "87900000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 23a44f315ab4f0a8bdb661569b8e8b9b1eccfa48847689afffc05d1c13ef1696
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-280000
size: "30"
- id: 43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "112000000"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "107000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-280000
size: "4780000"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "60500000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-280000 image ls --format yaml --alsologtostderr:
I0805 15:58:40.266745    2371 out.go:291] Setting OutFile to fd 1 ...
I0805 15:58:40.266912    2371 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 15:58:40.266916    2371 out.go:304] Setting ErrFile to fd 2...
I0805 15:58:40.266919    2371 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 15:58:40.267039    2371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
I0805 15:58:40.267477    2371 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 15:58:40.267543    2371 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 15:58:40.268436    2371 ssh_runner.go:195] Run: systemctl --version
I0805 15:58:40.268443    2371 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/functional-280000/id_rsa Username:docker}
I0805 15:58:40.291661    2371 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-280000 ssh pgrep buildkitd: exit status 1 (56.703625ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image build -t localhost/my-image:functional-280000 testdata/build --alsologtostderr
2024/08/05 15:58:40 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-280000 image build -t localhost/my-image:functional-280000 testdata/build --alsologtostderr: (1.479009334s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-280000 image build -t localhost/my-image:functional-280000 testdata/build --alsologtostderr:
I0805 15:58:40.390620    2375 out.go:291] Setting OutFile to fd 1 ...
I0805 15:58:40.391084    2375 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 15:58:40.391087    2375 out.go:304] Setting ErrFile to fd 2...
I0805 15:58:40.391089    2375 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 15:58:40.391217    2375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1054/.minikube/bin
I0805 15:58:40.391635    2375 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 15:58:40.392376    2375 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 15:58:40.393269    2375 ssh_runner.go:195] Run: systemctl --version
I0805 15:58:40.393282    2375 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1054/.minikube/machines/functional-280000/id_rsa Username:docker}
I0805 15:58:40.418004    2375 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1629451945.tar
I0805 15:58:40.418057    2375 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0805 15:58:40.422999    2375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1629451945.tar
I0805 15:58:40.425072    2375 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1629451945.tar: stat -c "%s %y" /var/lib/minikube/build/build.1629451945.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1629451945.tar': No such file or directory
I0805 15:58:40.425087    2375 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1629451945.tar --> /var/lib/minikube/build/build.1629451945.tar (3072 bytes)
I0805 15:58:40.433962    2375 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1629451945
I0805 15:58:40.437673    2375 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1629451945 -xf /var/lib/minikube/build/build.1629451945.tar
I0805 15:58:40.441279    2375 docker.go:360] Building image: /var/lib/minikube/build/build.1629451945
I0805 15:58:40.441328    2375 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-280000 /var/lib/minikube/build/build.1629451945
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.2s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:278fadd17ebddfed85e270c169da0d5468d384e39c4d7da73e3ddb2d41b30536 done
#8 naming to localhost/my-image:functional-280000 done
#8 DONE 0.0s
I0805 15:58:41.828213    2375 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-280000 /var/lib/minikube/build/build.1629451945: (1.386886541s)
I0805 15:58:41.828282    2375 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1629451945
I0805 15:58:41.832433    2375 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1629451945.tar
I0805 15:58:41.835727    2375 build_images.go:217] Built localhost/my-image:functional-280000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1629451945.tar
I0805 15:58:41.835743    2375 build_images.go:133] succeeded building to: functional-280000
I0805 15:58:41.835747    2375 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.61s)

TestFunctional/parallel/ImageCommands/Setup (1.75s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.731487709s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-280000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image load --daemon docker.io/kicbase/echo-server:functional-280000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.55s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image load --daemon docker.io/kicbase/echo-server:functional-280000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-280000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image load --daemon docker.io/kicbase/echo-server:functional-280000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.20s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image save docker.io/kicbase/echo-server:functional-280000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.22s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image rm docker.io/kicbase/echo-server:functional-280000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-280000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image save --daemon docker.io/kicbase/echo-server:functional-280000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-280000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)

TestFunctional/parallel/DockerEnv/bash (0.26s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-280000 docker-env) && out/minikube-darwin-arm64 status -p functional-280000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-280000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-280000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-280000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-280000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (199.28s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-949000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0805 15:58:50.547616    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
E0805 16:01:06.681219    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
E0805 16:01:34.371724    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/addons-299000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-949000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m19.081637667s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (199.28s)

TestMultiControlPlane/serial/DeployApp (5.98s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-949000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-949000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-949000 -- rollout status deployment/busybox: (4.264129625s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-949000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-949000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-949000 -- exec busybox-fc5497c4f-gqp7w -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-949000 -- exec busybox-fc5497c4f-mx4gb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-949000 -- exec busybox-fc5497c4f-r7qrt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-949000 -- exec busybox-fc5497c4f-gqp7w -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-949000 -- exec busybox-fc5497c4f-mx4gb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-949000 -- exec busybox-fc5497c4f-r7qrt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-949000 -- exec busybox-fc5497c4f-gqp7w -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-949000 -- exec busybox-fc5497c4f-mx4gb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-949000 -- exec busybox-fc5497c4f-r7qrt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.98s)

TestMultiControlPlane/serial/PingHostFromPods (0.76s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-949000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-949000 -- exec busybox-fc5497c4f-gqp7w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-949000 -- exec busybox-fc5497c4f-gqp7w -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-949000 -- exec busybox-fc5497c4f-mx4gb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-949000 -- exec busybox-fc5497c4f-mx4gb -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-949000 -- exec busybox-fc5497c4f-r7qrt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-949000 -- exec busybox-fc5497c4f-r7qrt -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.76s)

TestMultiControlPlane/serial/AddWorkerNode (56.25s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-949000 -v=7 --alsologtostderr
E0805 16:02:48.897837    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
E0805 16:02:48.904191    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
E0805 16:02:48.916282    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
E0805 16:02:48.938420    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
E0805 16:02:48.979914    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
E0805 16:02:49.062006    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
E0805 16:02:49.224081    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
E0805 16:02:49.546147    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
E0805 16:02:50.186797    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
E0805 16:02:51.468929    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
E0805 16:02:54.030689    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
E0805 16:02:59.152768    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-949000 -v=7 --alsologtostderr: (56.023679083s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.25s)

TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-949000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.26s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.26s)

TestMultiControlPlane/serial/CopyFile (4.35s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 cp testdata/cp-test.txt ha-949000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 cp ha-949000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile1013537704/001/cp-test_ha-949000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 cp ha-949000:/home/docker/cp-test.txt ha-949000-m02:/home/docker/cp-test_ha-949000_ha-949000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m02 "sudo cat /home/docker/cp-test_ha-949000_ha-949000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 cp ha-949000:/home/docker/cp-test.txt ha-949000-m03:/home/docker/cp-test_ha-949000_ha-949000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m03 "sudo cat /home/docker/cp-test_ha-949000_ha-949000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 cp ha-949000:/home/docker/cp-test.txt ha-949000-m04:/home/docker/cp-test_ha-949000_ha-949000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m04 "sudo cat /home/docker/cp-test_ha-949000_ha-949000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 cp testdata/cp-test.txt ha-949000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 cp ha-949000-m02:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile1013537704/001/cp-test_ha-949000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 cp ha-949000-m02:/home/docker/cp-test.txt ha-949000:/home/docker/cp-test_ha-949000-m02_ha-949000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000 "sudo cat /home/docker/cp-test_ha-949000-m02_ha-949000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 cp ha-949000-m02:/home/docker/cp-test.txt ha-949000-m03:/home/docker/cp-test_ha-949000-m02_ha-949000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m03 "sudo cat /home/docker/cp-test_ha-949000-m02_ha-949000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 cp ha-949000-m02:/home/docker/cp-test.txt ha-949000-m04:/home/docker/cp-test_ha-949000-m02_ha-949000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m04 "sudo cat /home/docker/cp-test_ha-949000-m02_ha-949000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 cp testdata/cp-test.txt ha-949000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 cp ha-949000-m03:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile1013537704/001/cp-test_ha-949000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 cp ha-949000-m03:/home/docker/cp-test.txt ha-949000:/home/docker/cp-test_ha-949000-m03_ha-949000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000 "sudo cat /home/docker/cp-test_ha-949000-m03_ha-949000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 cp ha-949000-m03:/home/docker/cp-test.txt ha-949000-m02:/home/docker/cp-test_ha-949000-m03_ha-949000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m02 "sudo cat /home/docker/cp-test_ha-949000-m03_ha-949000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 cp ha-949000-m03:/home/docker/cp-test.txt ha-949000-m04:/home/docker/cp-test_ha-949000-m03_ha-949000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m04 "sudo cat /home/docker/cp-test_ha-949000-m03_ha-949000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 cp testdata/cp-test.txt ha-949000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 cp ha-949000-m04:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile1013537704/001/cp-test_ha-949000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 cp ha-949000-m04:/home/docker/cp-test.txt ha-949000:/home/docker/cp-test_ha-949000-m04_ha-949000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000 "sudo cat /home/docker/cp-test_ha-949000-m04_ha-949000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 cp ha-949000-m04:/home/docker/cp-test.txt ha-949000-m02:/home/docker/cp-test_ha-949000-m04_ha-949000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m02 "sudo cat /home/docker/cp-test_ha-949000-m04_ha-949000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 cp ha-949000-m04:/home/docker/cp-test.txt ha-949000-m03:/home/docker/cp-test_ha-949000-m04_ha-949000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-949000 ssh -n ha-949000-m03 "sudo cat /home/docker/cp-test_ha-949000-m04_ha-949000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.35s)
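
The copy matrix above reduces to three forms of "minikube cp" plus an ssh-based verification. A condensed sketch using the same profile and paths as the log (a summary of the steps already shown, not additional test commands):

    $ minikube -p ha-949000 cp testdata/cp-test.txt ha-949000:/home/docker/cp-test.txt     # local -> node
    $ minikube -p ha-949000 cp ha-949000:/home/docker/cp-test.txt ./cp-test_ha-949000.txt  # node -> local
    $ minikube -p ha-949000 cp ha-949000:/home/docker/cp-test.txt \
        ha-949000-m02:/home/docker/cp-test_ha-949000_ha-949000-m02.txt                     # node -> node
    $ minikube -p ha-949000 ssh -n ha-949000-m02 \
        "sudo cat /home/docker/cp-test_ha-949000_ha-949000-m02.txt"                        # verify on the target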

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0805 16:17:48.869285    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
E0805 16:19:11.932601    1551 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1054/.minikube/profiles/functional-280000/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.079226792s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.08s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.95s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-500000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-500000 --output=json --user=testUser: (3.952159333s)
--- PASS: TestJSONOutput/stop/Command (3.95s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-215000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-215000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.763667ms)
-- stdout --
	{"specversion":"1.0","id":"c37b5292-aa67-4531-bbba-7710764fa5d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-215000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4ab238aa-d97f-4a9d-a683-1352d44745b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19373"}}
	{"specversion":"1.0","id":"51d979a0-ad65-49bd-8be4-6fce6f18828e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig"}}
	{"specversion":"1.0","id":"95fb2fae-66ea-43ff-9466-d204e3d163c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"4c04dd59-850c-432f-a828-7dfed840c0d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e4241dcd-d03c-46c8-935c-8e1275e6cdf7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube"}}
	{"specversion":"1.0","id":"570b5f2a-2ebc-4eb6-a5ee-3537e068fb80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c557ad16-b617-4a35-81cb-fb1e55e24fb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-215000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-215000
--- PASS: TestErrorJSONOutput (0.20s)
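
Each line of the stdout block above is a self-contained CloudEvents envelope, so the stream can be filtered line by line without a stateful JSON parser. A minimal sketch, assuming jq is available; the event type string and data fields are taken verbatim from the output above:

    $ out/minikube-darwin-arm64 start -p json-output-error-215000 --memory=2200 --output=json --wait=true --driver=fail \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message)"'
    DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on darwin/arm64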

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.95s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.95s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-229000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-229000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (97.560375ms)
-- stdout --
	* [NoKubernetes-229000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1054/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1054/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
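
The exit status 14 (MK_USAGE) above is the expected guardrail: --no-kubernetes and an explicit --kubernetes-version are mutually exclusive. A minimal sketch of the remedy the error text itself suggests:

    $ minikube config unset kubernetes-version    # clear any globally configured version
    $ minikube start -p NoKubernetes-229000 --no-kubernetes --driver=qemu2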

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-229000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-229000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.485292ms)
-- stdout --
	* The control-plane node NoKubernetes-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-229000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
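
Note that exit status 83 here comes from minikube itself, not from systemctl: the guest is Stopped, so the ssh subcommand never reaches the node. On a running host the same probe distinguishes the cases by exit code alone (a sketch; the zero/non-zero semantics are standard systemctl is-active behavior):

    $ minikube ssh -p NoKubernetes-229000 "sudo systemctl is-active --quiet service kubelet"
    $ echo $?    # 0 if kubelet is active, non-zero otherwise; 83 above because state=Stopped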

TestNoKubernetes/serial/ProfileList (31.33s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.56888525s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.755592084s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.33s)

TestNoKubernetes/serial/Stop (3.36s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-229000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-229000: (3.360090334s)
--- PASS: TestNoKubernetes/serial/Stop (3.36s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-229000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-229000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (49.401292ms)
-- stdout --
	* The control-plane node NoKubernetes-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-229000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-596000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

TestStartStop/group/old-k8s-version/serial/Stop (3.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-238000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-238000 --alsologtostderr -v=3: (3.600096333s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.60s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-238000 -n old-k8s-version-238000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-238000 -n old-k8s-version-238000: exit status 7 (56.444208ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-238000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
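
The EnableAddonAfterStop flow in this and the following groups is the same three-step pattern; a condensed sketch built from the commands in the log (exit status 7 from "status" on a stopped host is tolerated by the test):

    $ minikube status --format={{.Host}} -p old-k8s-version-238000    # prints "Stopped", exits 7
    $ minikube addons enable dashboard -p old-k8s-version-238000 \
        --images=MetricsScraper=registry.k8s.io/echoserver:1.4        # addon config is accepted while stopped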

TestStartStop/group/no-preload/serial/Stop (2.85s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-265000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-265000 --alsologtostderr -v=3: (2.851495166s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.85s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000: exit status 7 (58.473958ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-265000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (3.29s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-842000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-842000 --alsologtostderr -v=3: (3.290609083s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.29s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-842000 -n embed-certs-842000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-842000 -n embed-certs-842000: exit status 7 (54.607292ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-842000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-624000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-624000 --alsologtostderr -v=3: (3.324551166s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.32s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-624000 -n default-k8s-diff-port-624000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-624000 -n default-k8s-diff-port-624000: exit status 7 (61.863834ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-624000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-608000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.71s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-608000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-608000 --alsologtostderr -v=3: (3.711222166s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.71s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-608000 -n newest-cni-608000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-608000 -n newest-cni-608000: exit status 7 (69.000542ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-608000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/278)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.29s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-364000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-364000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-364000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-364000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-364000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-364000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-364000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-364000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-364000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-364000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-364000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-364000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-364000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-364000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-364000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-364000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-364000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-364000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-364000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-364000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-364000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-364000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-364000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-364000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-364000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-364000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-364000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-364000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-364000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-364000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-364000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

>>> host: kubelet daemon config:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

>>> k8s: kubelet logs:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-364000

>>> host: docker daemon status:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

>>> host: docker daemon config:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

>>> host: docker system info:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

>>> host: cri-docker daemon status:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

>>> host: cri-docker daemon config:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

>>> host: cri-dockerd version:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

>>> host: containerd daemon status:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

>>> host: containerd daemon config:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

>>> host: containerd config dump:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

>>> host: crio daemon status:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

>>> host: crio daemon config:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

>>> host: /etc/crio:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

>>> host: crio config:
* Profile "cilium-364000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364000"

----------------------- debugLogs end: cilium-364000 [took: 2.186928083s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-364000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-364000
--- SKIP: TestNetworkPlugins/group/cilium (2.29s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-629000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-629000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
