Test Report: QEMU_macOS 19389

4e9c16444aca391b349fd87cc48c80a0a38d518e:2024-08-07:35690

Failed tests (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 30.66
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 9.89
36 TestAddons/Setup 10.17
37 TestCertOptions 10.16
38 TestCertExpiration 195.36
39 TestDockerFlags 10.08
40 TestForceSystemdFlag 10.07
41 TestForceSystemdEnv 11.23
47 TestErrorSpam/setup 9.9
56 TestFunctional/serial/StartWithProxy 10.03
58 TestFunctional/serial/SoftStart 5.26
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.15
70 TestFunctional/serial/MinikubeKubectlCmd 0.73
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.97
72 TestFunctional/serial/ExtraConfig 5.26
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.07
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.12
86 TestFunctional/parallel/ServiceCmdConnect 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.11
91 TestFunctional/parallel/CpCmd 0.28
93 TestFunctional/parallel/FileSync 0.07
94 TestFunctional/parallel/CertSync 0.29
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
104 TestFunctional/parallel/Version/components 0.04
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.03
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.03
109 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
111 TestFunctional/parallel/DockerEnv/bash 0.05
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
115 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
116 TestFunctional/parallel/ServiceCmd/List 0.04
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
119 TestFunctional/parallel/ServiceCmd/Format 0.04
120 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.07
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 93.78
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.31
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.29
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.2
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.03
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 38.44
150 TestMultiControlPlane/serial/StartCluster 10.08
151 TestMultiControlPlane/serial/DeployApp 72.17
152 TestMultiControlPlane/serial/PingHostFromPods 0.08
153 TestMultiControlPlane/serial/AddWorkerNode 0.07
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.08
156 TestMultiControlPlane/serial/CopyFile 0.06
157 TestMultiControlPlane/serial/StopSecondaryNode 0.11
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.07
159 TestMultiControlPlane/serial/RestartSecondaryNode 54.7
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.08
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 7.21
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.07
164 TestMultiControlPlane/serial/StopCluster 3.95
165 TestMultiControlPlane/serial/RestartCluster 5.25
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
167 TestMultiControlPlane/serial/AddSecondaryNode 0.07
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
171 TestImageBuild/serial/Setup 9.91
174 TestJSONOutput/start/Command 9.95
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.05
203 TestMinikubeProfile 10.12
206 TestMountStart/serial/StartWithMountFirst 10.26
209 TestMultiNode/serial/FreshStart2Nodes 9.83
210 TestMultiNode/serial/DeployApp2Nodes 113.5
211 TestMultiNode/serial/PingHostFrom2Pods 0.08
212 TestMultiNode/serial/AddNode 0.07
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.08
215 TestMultiNode/serial/CopyFile 0.06
216 TestMultiNode/serial/StopNode 0.14
217 TestMultiNode/serial/StartAfterStop 51.19
218 TestMultiNode/serial/RestartKeepsNodes 8.73
219 TestMultiNode/serial/DeleteNode 0.1
220 TestMultiNode/serial/StopMultiNode 2.25
221 TestMultiNode/serial/RestartMultiNode 5.25
222 TestMultiNode/serial/ValidateNameConflict 20.37
226 TestPreload 10.2
228 TestScheduledStopUnix 10.16
229 TestSkaffold 13.12
232 TestRunningBinaryUpgrade 593.71
234 TestKubernetesUpgrade 18.86
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.26
248 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.42
250 TestStoppedBinaryUpgrade/Upgrade 571.84
252 TestPause/serial/Start 9.88
262 TestNoKubernetes/serial/StartWithK8s 9.94
263 TestNoKubernetes/serial/StartWithStopK8s 5.32
264 TestNoKubernetes/serial/Start 5.31
268 TestNoKubernetes/serial/StartNoArgs 5.32
270 TestNetworkPlugins/group/auto/Start 9.94
271 TestNetworkPlugins/group/calico/Start 9.83
272 TestNetworkPlugins/group/custom-flannel/Start 9.8
273 TestNetworkPlugins/group/false/Start 9.78
274 TestNetworkPlugins/group/kindnet/Start 9.81
275 TestNetworkPlugins/group/flannel/Start 9.81
276 TestNetworkPlugins/group/enable-default-cni/Start 9.83
277 TestNetworkPlugins/group/bridge/Start 9.78
278 TestNetworkPlugins/group/kubenet/Start 10.08
281 TestStartStop/group/old-k8s-version/serial/FirstStart 9.82
282 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
283 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
286 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
287 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
288 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
289 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
290 TestStartStop/group/old-k8s-version/serial/Pause 0.1
292 TestStartStop/group/no-preload/serial/FirstStart 9.88
293 TestStartStop/group/no-preload/serial/DeployApp 0.09
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
297 TestStartStop/group/no-preload/serial/SecondStart 5.26
299 TestStartStop/group/embed-certs/serial/FirstStart 10.47
300 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
301 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
302 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
303 TestStartStop/group/no-preload/serial/Pause 0.1
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.95
306 TestStartStop/group/embed-certs/serial/DeployApp 0.09
307 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
310 TestStartStop/group/embed-certs/serial/SecondStart 6.28
311 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
316 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
317 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.05
318 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
319 TestStartStop/group/embed-certs/serial/Pause 0.1
321 TestStartStop/group/newest-cni/serial/FirstStart 9.97
322 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
323 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.05
324 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
330 TestStartStop/group/newest-cni/serial/SecondStart 5.25
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
334 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (30.66s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-143000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-143000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (30.661460584s)

-- stdout --
	{"specversion":"1.0","id":"1b275c78-35f4-47bc-9e31-7da01d4bdf88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-143000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d1fde1a5-1f3d-430b-b8c7-f030344ffeec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19389"}}
	{"specversion":"1.0","id":"4807e6a0-3bc7-45f1-bba4-abd5cd444cd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig"}}
	{"specversion":"1.0","id":"d897cc9d-7622-4c50-885e-de07da6a6d8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"7b5382a7-0bb1-4aa2-9160-6dd6b4b9fada","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"02591686-20f3-4030-a4b9-d613ca7488d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube"}}
	{"specversion":"1.0","id":"7a823f10-1864-4740-b65a-4ea3c790cc5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"df650466-3b8a-403e-b125-9737828b00dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0aef9336-cae1-4940-9c01-d5be5f055808","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"0327e656-bef2-4397-a1e5-70e60d4e0df6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b6883e0c-12d0-4a6a-b63f-c082a75ebe62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-143000\" primary control-plane node in \"download-only-143000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"492f69ff-3ca8-43f4-a8d4-f9dfdffa1731","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9f98eba0-6736-49ed-b444-a12f421ab47e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10501dd20 0x10501dd20 0x10501dd20 0x10501dd20 0x10501dd20 0x10501dd20 0x10501dd20] Decompressors:map[bz2:0x14000893bf0 gz:0x14000893bf8 tar:0x14000893b80 tar.bz2:0x14000893b90 tar.gz:0x14000893ba0 tar.xz:0x14000893bb0 tar.zst:0x14000893bc0 tbz2:0x14000893b90 tgz:0x14
000893ba0 txz:0x14000893bb0 tzst:0x14000893bc0 xz:0x14000893c10 zip:0x14000893c50 zst:0x14000893c18] Getters:map[file:0x14000cea1f0 http:0x14000880460 https:0x14000880500] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"749ea55f-45e1-4e27-a094-51e2eb2c6dea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0807 10:45:47.480344    7168 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:45:47.480478    7168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:45:47.480481    7168 out.go:304] Setting ErrFile to fd 2...
	I0807 10:45:47.480483    7168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:45:47.480619    7168 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	W0807 10:45:47.480727    7168 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19389-6671/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19389-6671/.minikube/config/config.json: no such file or directory
	I0807 10:45:47.482005    7168 out.go:298] Setting JSON to true
	I0807 10:45:47.498535    7168 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4516,"bootTime":1723048231,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 10:45:47.498598    7168 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 10:45:47.503965    7168 out.go:97] [download-only-143000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 10:45:47.504137    7168 notify.go:220] Checking for updates...
	W0807 10:45:47.504128    7168 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball: no such file or directory
	I0807 10:45:47.506902    7168 out.go:169] MINIKUBE_LOCATION=19389
	I0807 10:45:47.509918    7168 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 10:45:47.514937    7168 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 10:45:47.517987    7168 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 10:45:47.520971    7168 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	W0807 10:45:47.526858    7168 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0807 10:45:47.527036    7168 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 10:45:47.529809    7168 out.go:97] Using the qemu2 driver based on user configuration
	I0807 10:45:47.529829    7168 start.go:297] selected driver: qemu2
	I0807 10:45:47.529843    7168 start.go:901] validating driver "qemu2" against <nil>
	I0807 10:45:47.529898    7168 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 10:45:47.532864    7168 out.go:169] Automatically selected the socket_vmnet network
	I0807 10:45:47.538266    7168 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0807 10:45:47.538363    7168 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0807 10:45:47.538415    7168 cni.go:84] Creating CNI manager for ""
	I0807 10:45:47.538434    7168 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0807 10:45:47.538489    7168 start.go:340] cluster config:
	{Name:download-only-143000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:45:47.542397    7168 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:45:47.545884    7168 out.go:97] Downloading VM boot image ...
	I0807 10:45:47.545899    7168 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0807 10:46:00.322530    7168 out.go:97] Starting "download-only-143000" primary control-plane node in "download-only-143000" cluster
	I0807 10:46:00.322558    7168 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0807 10:46:00.380462    7168 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0807 10:46:00.380482    7168 cache.go:56] Caching tarball of preloaded images
	I0807 10:46:00.380625    7168 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0807 10:46:00.386732    7168 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0807 10:46:00.386738    7168 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0807 10:46:00.474208    7168 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0807 10:46:17.005233    7168 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0807 10:46:17.005398    7168 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0807 10:46:17.700743    7168 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0807 10:46:17.700940    7168 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/download-only-143000/config.json ...
	I0807 10:46:17.700959    7168 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/download-only-143000/config.json: {Name:mk62558161899f20da00983e37b95b9b179e1f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 10:46:17.701247    7168 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0807 10:46:17.701449    7168 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0807 10:46:18.064320    7168 out.go:169] 
	W0807 10:46:18.070273    7168 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10501dd20 0x10501dd20 0x10501dd20 0x10501dd20 0x10501dd20 0x10501dd20 0x10501dd20] Decompressors:map[bz2:0x14000893bf0 gz:0x14000893bf8 tar:0x14000893b80 tar.bz2:0x14000893b90 tar.gz:0x14000893ba0 tar.xz:0x14000893bb0 tar.zst:0x14000893bc0 tbz2:0x14000893b90 tgz:0x14000893ba0 txz:0x14000893bb0 tzst:0x14000893bc0 xz:0x14000893c10 zip:0x14000893c50 zst:0x14000893c18] Getters:map[file:0x14000cea1f0 http:0x14000880460 https:0x14000880500] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0807 10:46:18.070295    7168 out_reason.go:110] 
	W0807 10:46:18.077235    7168 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 10:46:18.081041    7168 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-143000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (30.66s)
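
The root cause above is the kubectl download: dl.k8s.io answers 404 for the v1.20.0 darwin/arm64 checksum file, which minikube surfaces as INET_CACHE_KUBECTL / exit status 40. A minimal, hypothetical Go sketch (not part of the test suite; the URL is copied from the error above) to confirm the 404 from any host:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Checksum URL copied verbatim from the INET_CACHE_KUBECTL error above.
		const url = "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request error:", err)
			return
		}
		resp.Body.Close()
		// A 404 here matches "Error downloading checksum file: bad response code: 404",
		// suggesting no darwin/arm64 kubectl was published for v1.20.0.
		fmt.Println(url, "->", resp.Status)
	}

TestDownloadOnly/v1.20.0/kubectl below fails as a direct consequence: the binary was never cached, so the stat on the cache path cannot succeed.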

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (9.89s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-408000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-408000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.736168666s)

-- stdout --
	* [offline-docker-408000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-408000" primary control-plane node in "offline-docker-408000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-408000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 10:57:47.113161    8857 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:57:47.113326    8857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:57:47.113330    8857 out.go:304] Setting ErrFile to fd 2...
	I0807 10:57:47.113332    8857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:57:47.113470    8857 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:57:47.114680    8857 out.go:298] Setting JSON to false
	I0807 10:57:47.132547    8857 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5236,"bootTime":1723048231,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 10:57:47.132633    8857 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 10:57:47.137335    8857 out.go:177] * [offline-docker-408000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 10:57:47.145458    8857 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 10:57:47.145478    8857 notify.go:220] Checking for updates...
	I0807 10:57:47.152344    8857 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 10:57:47.155392    8857 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 10:57:47.158316    8857 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 10:57:47.161329    8857 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 10:57:47.164433    8857 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 10:57:47.167746    8857 config.go:182] Loaded profile config "multinode-190000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:57:47.167813    8857 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 10:57:47.171373    8857 out.go:177] * Using the qemu2 driver based on user configuration
	I0807 10:57:47.178362    8857 start.go:297] selected driver: qemu2
	I0807 10:57:47.178373    8857 start.go:901] validating driver "qemu2" against <nil>
	I0807 10:57:47.178381    8857 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 10:57:47.180497    8857 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 10:57:47.183338    8857 out.go:177] * Automatically selected the socket_vmnet network
	I0807 10:57:47.186434    8857 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 10:57:47.186450    8857 cni.go:84] Creating CNI manager for ""
	I0807 10:57:47.186456    8857 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 10:57:47.186459    8857 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 10:57:47.186501    8857 start.go:340] cluster config:
	{Name:offline-docker-408000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-408000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bi
n/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:57:47.190097    8857 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:57:47.197364    8857 out.go:177] * Starting "offline-docker-408000" primary control-plane node in "offline-docker-408000" cluster
	I0807 10:57:47.201356    8857 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 10:57:47.201388    8857 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 10:57:47.201399    8857 cache.go:56] Caching tarball of preloaded images
	I0807 10:57:47.201477    8857 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 10:57:47.201482    8857 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 10:57:47.201548    8857 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/offline-docker-408000/config.json ...
	I0807 10:57:47.201559    8857 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/offline-docker-408000/config.json: {Name:mkbbb3f30d6e9f06362a46f67a71ee189fbb2806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 10:57:47.201791    8857 start.go:360] acquireMachinesLock for offline-docker-408000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:57:47.201825    8857 start.go:364] duration metric: took 24.709µs to acquireMachinesLock for "offline-docker-408000"
	I0807 10:57:47.201835    8857 start.go:93] Provisioning new machine with config: &{Name:offline-docker-408000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-408000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 10:57:47.201880    8857 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 10:57:47.210402    8857 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0807 10:57:47.226426    8857 start.go:159] libmachine.API.Create for "offline-docker-408000" (driver="qemu2")
	I0807 10:57:47.226463    8857 client.go:168] LocalClient.Create starting
	I0807 10:57:47.226550    8857 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 10:57:47.226581    8857 main.go:141] libmachine: Decoding PEM data...
	I0807 10:57:47.226592    8857 main.go:141] libmachine: Parsing certificate...
	I0807 10:57:47.226644    8857 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 10:57:47.226667    8857 main.go:141] libmachine: Decoding PEM data...
	I0807 10:57:47.226674    8857 main.go:141] libmachine: Parsing certificate...
	I0807 10:57:47.227123    8857 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 10:57:47.382074    8857 main.go:141] libmachine: Creating SSH key...
	I0807 10:57:47.455603    8857 main.go:141] libmachine: Creating Disk image...
	I0807 10:57:47.455612    8857 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 10:57:47.455786    8857 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/offline-docker-408000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/offline-docker-408000/disk.qcow2
	I0807 10:57:47.465306    8857 main.go:141] libmachine: STDOUT: 
	I0807 10:57:47.465331    8857 main.go:141] libmachine: STDERR: 
	I0807 10:57:47.465402    8857 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/offline-docker-408000/disk.qcow2 +20000M
	I0807 10:57:47.474565    8857 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 10:57:47.474615    8857 main.go:141] libmachine: STDERR: 
	I0807 10:57:47.474651    8857 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/offline-docker-408000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/offline-docker-408000/disk.qcow2
	I0807 10:57:47.474655    8857 main.go:141] libmachine: Starting QEMU VM...
	I0807 10:57:47.474888    8857 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:57:47.474921    8857 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/offline-docker-408000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/offline-docker-408000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/offline-docker-408000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:02:24:94:15:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/offline-docker-408000/disk.qcow2
	I0807 10:57:47.476820    8857 main.go:141] libmachine: STDOUT: 
	I0807 10:57:47.476838    8857 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:57:47.476855    8857 client.go:171] duration metric: took 250.38875ms to LocalClient.Create
	I0807 10:57:49.478908    8857 start.go:128] duration metric: took 2.277036458s to createHost
	I0807 10:57:49.478925    8857 start.go:83] releasing machines lock for "offline-docker-408000", held for 2.277111875s
	W0807 10:57:49.478945    8857 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:57:49.484843    8857 out.go:177] * Deleting "offline-docker-408000" in qemu2 ...
	W0807 10:57:49.505735    8857 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:57:49.505745    8857 start.go:729] Will try again in 5 seconds ...
	I0807 10:57:54.507878    8857 start.go:360] acquireMachinesLock for offline-docker-408000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:57:54.508302    8857 start.go:364] duration metric: took 341.166µs to acquireMachinesLock for "offline-docker-408000"
	I0807 10:57:54.508433    8857 start.go:93] Provisioning new machine with config: &{Name:offline-docker-408000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-408000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 10:57:54.508781    8857 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 10:57:54.515719    8857 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0807 10:57:54.564124    8857 start.go:159] libmachine.API.Create for "offline-docker-408000" (driver="qemu2")
	I0807 10:57:54.564176    8857 client.go:168] LocalClient.Create starting
	I0807 10:57:54.564284    8857 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 10:57:54.564343    8857 main.go:141] libmachine: Decoding PEM data...
	I0807 10:57:54.564358    8857 main.go:141] libmachine: Parsing certificate...
	I0807 10:57:54.564417    8857 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 10:57:54.564460    8857 main.go:141] libmachine: Decoding PEM data...
	I0807 10:57:54.564475    8857 main.go:141] libmachine: Parsing certificate...
	I0807 10:57:54.564980    8857 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 10:57:54.729846    8857 main.go:141] libmachine: Creating SSH key...
	I0807 10:57:54.752859    8857 main.go:141] libmachine: Creating Disk image...
	I0807 10:57:54.752864    8857 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 10:57:54.753075    8857 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/offline-docker-408000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/offline-docker-408000/disk.qcow2
	I0807 10:57:54.762183    8857 main.go:141] libmachine: STDOUT: 
	I0807 10:57:54.762201    8857 main.go:141] libmachine: STDERR: 
	I0807 10:57:54.762249    8857 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/offline-docker-408000/disk.qcow2 +20000M
	I0807 10:57:54.769972    8857 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 10:57:54.769987    8857 main.go:141] libmachine: STDERR: 
	I0807 10:57:54.770002    8857 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/offline-docker-408000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/offline-docker-408000/disk.qcow2
	I0807 10:57:54.770007    8857 main.go:141] libmachine: Starting QEMU VM...
	I0807 10:57:54.770015    8857 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:57:54.770052    8857 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/offline-docker-408000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/offline-docker-408000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/offline-docker-408000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:db:08:e5:c7:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/offline-docker-408000/disk.qcow2
	I0807 10:57:54.771623    8857 main.go:141] libmachine: STDOUT: 
	I0807 10:57:54.771637    8857 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:57:54.771648    8857 client.go:171] duration metric: took 207.466959ms to LocalClient.Create
	I0807 10:57:56.773820    8857 start.go:128] duration metric: took 2.265023333s to createHost
	I0807 10:57:56.773898    8857 start.go:83] releasing machines lock for "offline-docker-408000", held for 2.265588333s
	W0807 10:57:56.774319    8857 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-408000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-408000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:57:56.787975    8857 out.go:177] 
	W0807 10:57:56.792122    8857 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 10:57:56.792160    8857 out.go:239] * 
	* 
	W0807 10:57:56.795130    8857 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 10:57:56.804928    8857 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-408000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-07 10:57:56.820507 -0700 PDT m=+729.413568168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-408000 -n offline-docker-408000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-408000 -n offline-docker-408000: exit status 7 (69.574458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-408000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-408000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-408000
--- FAIL: TestOffline (9.89s)
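
Nearly every remaining failure in this report reduces to the error seen here: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never created and the test exits with GUEST_PROVISION. A hypothetical standalone probe (assumption: the daemon listens on the unix socket named by SocketVMnetPath in the cluster config above) to check the daemon from the CI host:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// SocketVMnetPath from the cluster config logged above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" or "no such file or directory" means the daemon
			// is not running; "permission denied" would instead point at socket
			// ownership rather than a missing daemon.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet accepting connections at", sock)
	}

If the probe fails, restarting the daemon on the host (for a Homebrew install, sudo brew services start socket_vmnet) is the usual remedy before re-running the suite.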

TestAddons/Setup (10.17s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-541000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-541000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.172531792s)

-- stdout --
	* [addons-541000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-541000" primary control-plane node in "addons-541000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-541000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 10:46:48.948366    7300 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:46:48.948516    7300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:46:48.948519    7300 out.go:304] Setting ErrFile to fd 2...
	I0807 10:46:48.948521    7300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:46:48.948639    7300 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:46:48.949727    7300 out.go:298] Setting JSON to false
	I0807 10:46:48.965680    7300 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4577,"bootTime":1723048231,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 10:46:48.965754    7300 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 10:46:48.970401    7300 out.go:177] * [addons-541000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 10:46:48.976349    7300 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 10:46:48.976407    7300 notify.go:220] Checking for updates...
	I0807 10:46:48.983363    7300 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 10:46:48.986290    7300 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 10:46:48.989412    7300 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 10:46:48.992357    7300 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 10:46:48.993657    7300 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 10:46:48.996571    7300 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 10:46:49.000338    7300 out.go:177] * Using the qemu2 driver based on user configuration
	I0807 10:46:49.005366    7300 start.go:297] selected driver: qemu2
	I0807 10:46:49.005374    7300 start.go:901] validating driver "qemu2" against <nil>
	I0807 10:46:49.005382    7300 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 10:46:49.007634    7300 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 10:46:49.010381    7300 out.go:177] * Automatically selected the socket_vmnet network
	I0807 10:46:49.013456    7300 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 10:46:49.013481    7300 cni.go:84] Creating CNI manager for ""
	I0807 10:46:49.013490    7300 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 10:46:49.013502    7300 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 10:46:49.013531    7300 start.go:340] cluster config:
	{Name:addons-541000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-541000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:46:49.017100    7300 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:46:49.025423    7300 out.go:177] * Starting "addons-541000" primary control-plane node in "addons-541000" cluster
	I0807 10:46:49.029301    7300 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 10:46:49.029319    7300 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 10:46:49.029332    7300 cache.go:56] Caching tarball of preloaded images
	I0807 10:46:49.029407    7300 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 10:46:49.029418    7300 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 10:46:49.029626    7300 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/addons-541000/config.json ...
	I0807 10:46:49.029637    7300 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/addons-541000/config.json: {Name:mk8b2bdd2c0bb1c8febf344e405b36ffbbeb7c05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 10:46:49.030028    7300 start.go:360] acquireMachinesLock for addons-541000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:46:49.030091    7300 start.go:364] duration metric: took 57.416µs to acquireMachinesLock for "addons-541000"
	I0807 10:46:49.030101    7300 start.go:93] Provisioning new machine with config: &{Name:addons-541000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-541000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 10:46:49.030135    7300 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 10:46:49.038340    7300 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0807 10:46:49.056114    7300 start.go:159] libmachine.API.Create for "addons-541000" (driver="qemu2")
	I0807 10:46:49.056151    7300 client.go:168] LocalClient.Create starting
	I0807 10:46:49.056282    7300 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 10:46:49.243223    7300 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 10:46:49.333116    7300 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 10:46:49.592868    7300 main.go:141] libmachine: Creating SSH key...
	I0807 10:46:49.673390    7300 main.go:141] libmachine: Creating Disk image...
	I0807 10:46:49.673399    7300 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 10:46:49.673629    7300 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/addons-541000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/addons-541000/disk.qcow2
	I0807 10:46:49.682912    7300 main.go:141] libmachine: STDOUT: 
	I0807 10:46:49.682932    7300 main.go:141] libmachine: STDERR: 
	I0807 10:46:49.682982    7300 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/addons-541000/disk.qcow2 +20000M
	I0807 10:46:49.690767    7300 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 10:46:49.690781    7300 main.go:141] libmachine: STDERR: 
	I0807 10:46:49.690796    7300 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/addons-541000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/addons-541000/disk.qcow2
	I0807 10:46:49.690801    7300 main.go:141] libmachine: Starting QEMU VM...
	I0807 10:46:49.690824    7300 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:46:49.690851    7300 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/addons-541000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/addons-541000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/addons-541000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:72:9c:f7:27:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/addons-541000/disk.qcow2
	I0807 10:46:49.692520    7300 main.go:141] libmachine: STDOUT: 
	I0807 10:46:49.692535    7300 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:46:49.692552    7300 client.go:171] duration metric: took 636.312167ms to LocalClient.Create
	I0807 10:46:51.694959    7300 start.go:128] duration metric: took 2.664479375s to createHost
	I0807 10:46:51.695009    7300 start.go:83] releasing machines lock for "addons-541000", held for 2.664585041s
	W0807 10:46:51.695071    7300 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:46:51.707470    7300 out.go:177] * Deleting "addons-541000" in qemu2 ...
	W0807 10:46:51.737334    7300 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:46:51.737356    7300 start.go:729] Will try again in 5 seconds ...
	I0807 10:46:56.740080    7300 start.go:360] acquireMachinesLock for addons-541000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:46:56.740541    7300 start.go:364] duration metric: took 344.708µs to acquireMachinesLock for "addons-541000"
	I0807 10:46:56.740681    7300 start.go:93] Provisioning new machine with config: &{Name:addons-541000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-541000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 10:46:56.740987    7300 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 10:46:56.751647    7300 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0807 10:46:56.802239    7300 start.go:159] libmachine.API.Create for "addons-541000" (driver="qemu2")
	I0807 10:46:56.802298    7300 client.go:168] LocalClient.Create starting
	I0807 10:46:56.802417    7300 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 10:46:56.802485    7300 main.go:141] libmachine: Decoding PEM data...
	I0807 10:46:56.802503    7300 main.go:141] libmachine: Parsing certificate...
	I0807 10:46:56.802621    7300 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 10:46:56.802667    7300 main.go:141] libmachine: Decoding PEM data...
	I0807 10:46:56.802687    7300 main.go:141] libmachine: Parsing certificate...
	I0807 10:46:56.803378    7300 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 10:46:56.968453    7300 main.go:141] libmachine: Creating SSH key...
	I0807 10:46:57.027580    7300 main.go:141] libmachine: Creating Disk image...
	I0807 10:46:57.027587    7300 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 10:46:57.027800    7300 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/addons-541000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/addons-541000/disk.qcow2
	I0807 10:46:57.037081    7300 main.go:141] libmachine: STDOUT: 
	I0807 10:46:57.037111    7300 main.go:141] libmachine: STDERR: 
	I0807 10:46:57.037166    7300 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/addons-541000/disk.qcow2 +20000M
	I0807 10:46:57.045147    7300 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 10:46:57.045164    7300 main.go:141] libmachine: STDERR: 
	I0807 10:46:57.045181    7300 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/addons-541000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/addons-541000/disk.qcow2
	I0807 10:46:57.045186    7300 main.go:141] libmachine: Starting QEMU VM...
	I0807 10:46:57.045196    7300 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:46:57.045226    7300 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/addons-541000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/addons-541000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/addons-541000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:a8:e7:c0:e3:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/addons-541000/disk.qcow2
	I0807 10:46:57.046900    7300 main.go:141] libmachine: STDOUT: 
	I0807 10:46:57.046918    7300 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:46:57.046935    7300 client.go:171] duration metric: took 244.612042ms to LocalClient.Create
	I0807 10:46:59.049273    7300 start.go:128] duration metric: took 2.308086708s to createHost
	I0807 10:46:59.049338    7300 start.go:83] releasing machines lock for "addons-541000", held for 2.308606625s
	W0807 10:46:59.049736    7300 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-541000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-541000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:46:59.065100    7300 out.go:177] 
	W0807 10:46:59.069143    7300 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 10:46:59.069189    7300 out.go:239] * 
	* 
	W0807 10:46:59.071593    7300 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 10:46:59.080084    7300 out.go:177] 

** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-541000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.17s)
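
Editor's note: the stderr trace above shows minikube's retry path: createHost fails, the half-created profile is deleted, and a second attempt runs 5 seconds later before the test exits with GUEST_PROVISION. The failing step can be reproduced outside minikube: socket_vmnet_client first connects to the unix socket and only then execs the wrapped command with the vmnet connection attached as a file descriptor. A sketch using the client path from the logs, with "true" as a stand-in for the qemu-system-aarch64 invocation:

	# Expected to fail the same way while the daemon is down:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true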

TestCertOptions (10.16s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-891000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-891000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.90140225s)

-- stdout --
	* [cert-options-891000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-891000" primary control-plane node in "cert-options-891000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-891000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-891000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-891000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-891000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.940792ms)

-- stdout --
	* The control-plane node cert-options-891000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-891000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-891000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-891000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-891000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-891000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.162416ms)

-- stdout --
	* The control-plane node cert-options-891000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-891000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-891000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-891000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-891000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-07 10:58:28.331222 -0700 PDT m=+760.924510168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-891000 -n cert-options-891000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-891000 -n cert-options-891000: exit status 7 (29.864292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-891000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-891000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-891000
--- FAIL: TestCertOptions (10.16s)
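
Editor's note: because the VM never started, the ssh step returns exit status 83 ("host is not running") instead of certificate text, so the SAN assertions at cert_options_test.go:69 fail with nothing to inspect. On a healthy host the same check can be run by hand; this is the command the test issues, with a grep appended here (an addition, not part of the test) to isolate the SAN extension:

	out/minikube-darwin-arm64 -p cert-options-891000 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"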

TestCertExpiration (195.36s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-081000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-081000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.984637666s)

-- stdout --
	* [cert-expiration-081000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-081000" primary control-plane node in "cert-expiration-081000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-081000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-081000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-081000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-081000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-081000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.223818334s)

-- stdout --
	* [cert-expiration-081000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-081000" primary control-plane node in "cert-expiration-081000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-081000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-081000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-081000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-081000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-081000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-081000" primary control-plane node in "cert-expiration-081000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-081000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-081000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-081000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-07 11:01:28.320131 -0700 PDT m=+940.914714834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-081000 -n cert-expiration-081000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-081000 -n cert-expiration-081000: exit status 7 (68.370083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-081000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-081000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-081000
--- FAIL: TestCertExpiration (195.36s)
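
Editor's note: the 195s duration comes from the test waiting out the 3-minute certificate lifetime between its two start attempts (roughly 10s + 180s + 5s). Once the socket_vmnet daemon is back, the scenario can be reproduced by hand with the same flags as the failed runs: start with short-lived certs, let them expire, restart, and look for the expired-certificate warning the assertion at cert_options_test.go:136 expects:

	out/minikube-darwin-arm64 start -p cert-expiration-081000 --memory=2048 --cert-expiration=3m --driver=qemu2
	sleep 180   # let the 3m certificates expire
	out/minikube-darwin-arm64 start -p cert-expiration-081000 --memory=2048 --cert-expiration=8760h --driver=qemu2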

TestDockerFlags (10.08s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-198000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-198000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.850031667s)

-- stdout --
	* [docker-flags-198000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-198000" primary control-plane node in "docker-flags-198000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-198000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 10:58:08.222195    9048 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:58:08.222327    9048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:58:08.222331    9048 out.go:304] Setting ErrFile to fd 2...
	I0807 10:58:08.222333    9048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:58:08.222459    9048 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:58:08.223537    9048 out.go:298] Setting JSON to false
	I0807 10:58:08.239701    9048 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5257,"bootTime":1723048231,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 10:58:08.239778    9048 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 10:58:08.246368    9048 out.go:177] * [docker-flags-198000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 10:58:08.253282    9048 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 10:58:08.253348    9048 notify.go:220] Checking for updates...
	I0807 10:58:08.260268    9048 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 10:58:08.263288    9048 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 10:58:08.266279    9048 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 10:58:08.269271    9048 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 10:58:08.272244    9048 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 10:58:08.275561    9048 config.go:182] Loaded profile config "force-systemd-flag-880000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:58:08.275634    9048 config.go:182] Loaded profile config "multinode-190000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:58:08.275672    9048 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 10:58:08.279274    9048 out.go:177] * Using the qemu2 driver based on user configuration
	I0807 10:58:08.286272    9048 start.go:297] selected driver: qemu2
	I0807 10:58:08.286279    9048 start.go:901] validating driver "qemu2" against <nil>
	I0807 10:58:08.286287    9048 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 10:58:08.288591    9048 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 10:58:08.291266    9048 out.go:177] * Automatically selected the socket_vmnet network
	I0807 10:58:08.292521    9048 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0807 10:58:08.292534    9048 cni.go:84] Creating CNI manager for ""
	I0807 10:58:08.292542    9048 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 10:58:08.292546    9048 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 10:58:08.292575    9048 start.go:340] cluster config:
	{Name:docker-flags-198000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-198000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:58:08.296223    9048 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:58:08.303287    9048 out.go:177] * Starting "docker-flags-198000" primary control-plane node in "docker-flags-198000" cluster
	I0807 10:58:08.307224    9048 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 10:58:08.307242    9048 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 10:58:08.307254    9048 cache.go:56] Caching tarball of preloaded images
	I0807 10:58:08.307347    9048 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 10:58:08.307353    9048 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 10:58:08.307409    9048 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/docker-flags-198000/config.json ...
	I0807 10:58:08.307421    9048 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/docker-flags-198000/config.json: {Name:mkc8b65933cb6407a55e0074521556863bce2af5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 10:58:08.307648    9048 start.go:360] acquireMachinesLock for docker-flags-198000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:58:08.307685    9048 start.go:364] duration metric: took 29.667µs to acquireMachinesLock for "docker-flags-198000"
	I0807 10:58:08.307696    9048 start.go:93] Provisioning new machine with config: &{Name:docker-flags-198000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-198000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 10:58:08.307749    9048 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 10:58:08.315249    9048 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0807 10:58:08.332936    9048 start.go:159] libmachine.API.Create for "docker-flags-198000" (driver="qemu2")
	I0807 10:58:08.332971    9048 client.go:168] LocalClient.Create starting
	I0807 10:58:08.333030    9048 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 10:58:08.333064    9048 main.go:141] libmachine: Decoding PEM data...
	I0807 10:58:08.333074    9048 main.go:141] libmachine: Parsing certificate...
	I0807 10:58:08.333113    9048 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 10:58:08.333137    9048 main.go:141] libmachine: Decoding PEM data...
	I0807 10:58:08.333145    9048 main.go:141] libmachine: Parsing certificate...
	I0807 10:58:08.333500    9048 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 10:58:08.488883    9048 main.go:141] libmachine: Creating SSH key...
	I0807 10:58:08.552728    9048 main.go:141] libmachine: Creating Disk image...
	I0807 10:58:08.552733    9048 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 10:58:08.552954    9048 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/docker-flags-198000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/docker-flags-198000/disk.qcow2
	I0807 10:58:08.562058    9048 main.go:141] libmachine: STDOUT: 
	I0807 10:58:08.562076    9048 main.go:141] libmachine: STDERR: 
	I0807 10:58:08.562122    9048 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/docker-flags-198000/disk.qcow2 +20000M
	I0807 10:58:08.569896    9048 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 10:58:08.569914    9048 main.go:141] libmachine: STDERR: 
	I0807 10:58:08.569931    9048 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/docker-flags-198000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/docker-flags-198000/disk.qcow2
	I0807 10:58:08.569936    9048 main.go:141] libmachine: Starting QEMU VM...
	I0807 10:58:08.569948    9048 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:58:08.569983    9048 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/docker-flags-198000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/docker-flags-198000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/docker-flags-198000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:55:97:93:cd:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/docker-flags-198000/disk.qcow2
	I0807 10:58:08.571607    9048 main.go:141] libmachine: STDOUT: 
	I0807 10:58:08.571622    9048 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:58:08.571645    9048 client.go:171] duration metric: took 238.668375ms to LocalClient.Create
	I0807 10:58:10.572329    9048 start.go:128] duration metric: took 2.264577792s to createHost
	I0807 10:58:10.572374    9048 start.go:83] releasing machines lock for "docker-flags-198000", held for 2.264695917s
	W0807 10:58:10.572415    9048 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:58:10.597547    9048 out.go:177] * Deleting "docker-flags-198000" in qemu2 ...
	W0807 10:58:10.619241    9048 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:58:10.619259    9048 start.go:729] Will try again in 5 seconds ...
	I0807 10:58:15.621421    9048 start.go:360] acquireMachinesLock for docker-flags-198000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:58:15.621910    9048 start.go:364] duration metric: took 367.542µs to acquireMachinesLock for "docker-flags-198000"
	I0807 10:58:15.622061    9048 start.go:93] Provisioning new machine with config: &{Name:docker-flags-198000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-198000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 10:58:15.622407    9048 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 10:58:15.631970    9048 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0807 10:58:15.685211    9048 start.go:159] libmachine.API.Create for "docker-flags-198000" (driver="qemu2")
	I0807 10:58:15.685279    9048 client.go:168] LocalClient.Create starting
	I0807 10:58:15.685385    9048 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 10:58:15.685443    9048 main.go:141] libmachine: Decoding PEM data...
	I0807 10:58:15.685459    9048 main.go:141] libmachine: Parsing certificate...
	I0807 10:58:15.685526    9048 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 10:58:15.685569    9048 main.go:141] libmachine: Decoding PEM data...
	I0807 10:58:15.685580    9048 main.go:141] libmachine: Parsing certificate...
	I0807 10:58:15.686161    9048 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 10:58:15.859443    9048 main.go:141] libmachine: Creating SSH key...
	I0807 10:58:15.977205    9048 main.go:141] libmachine: Creating Disk image...
	I0807 10:58:15.977218    9048 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 10:58:15.977415    9048 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/docker-flags-198000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/docker-flags-198000/disk.qcow2
	I0807 10:58:15.986650    9048 main.go:141] libmachine: STDOUT: 
	I0807 10:58:15.986672    9048 main.go:141] libmachine: STDERR: 
	I0807 10:58:15.986739    9048 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/docker-flags-198000/disk.qcow2 +20000M
	I0807 10:58:15.994841    9048 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 10:58:15.994854    9048 main.go:141] libmachine: STDERR: 
	I0807 10:58:15.994871    9048 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/docker-flags-198000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/docker-flags-198000/disk.qcow2
	I0807 10:58:15.994875    9048 main.go:141] libmachine: Starting QEMU VM...
	I0807 10:58:15.994887    9048 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:58:15.994912    9048 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/docker-flags-198000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/docker-flags-198000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/docker-flags-198000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:a7:df:c2:78:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/docker-flags-198000/disk.qcow2
	I0807 10:58:15.996471    9048 main.go:141] libmachine: STDOUT: 
	I0807 10:58:15.996487    9048 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:58:15.996499    9048 client.go:171] duration metric: took 311.218042ms to LocalClient.Create
	I0807 10:58:17.998673    9048 start.go:128] duration metric: took 2.376245875s to createHost
	I0807 10:58:17.998754    9048 start.go:83] releasing machines lock for "docker-flags-198000", held for 2.37683475s
	W0807 10:58:17.999217    9048 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-198000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-198000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:58:18.013874    9048 out.go:177] 
	W0807 10:58:18.017943    9048 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 10:58:18.017981    9048 out.go:239] * 
	* 
	W0807 10:58:18.020595    9048 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 10:58:18.030745    9048 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-198000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-198000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-198000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.79125ms)

-- stdout --
	* The control-plane node docker-flags-198000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-198000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-198000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-198000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-198000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-198000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-198000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-198000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-198000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (42.511084ms)

-- stdout --
	* The control-plane node docker-flags-198000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-198000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-198000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-198000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-198000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-198000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-07 10:58:18.17093 -0700 PDT m=+750.764144251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-198000 -n docker-flags-198000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-198000 -n docker-flags-198000: exit status 7 (28.518792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-198000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-198000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-198000
--- FAIL: TestDockerFlags (10.08s)
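
Every start attempt in the block above dies at the same step: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the socket_vmnet daemon is not accepting connections on /var/run/socket_vmnet, so the VM never boots and the profile is left Stopped. A minimal triage sketch for the CI host, assuming socket_vmnet is installed under /opt/socket_vmnet as these logs show (the --vmnet-gateway address below is illustrative, not taken from this run):

	# Check whether the daemon is running and the unix socket exists
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet

	# If it is down, start it (vmnet.framework requires root), then retry the failing start
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &
	out/minikube-darwin-arm64 start -p docker-flags-198000 --driver=qemu2

If the daemon comes back, the identical "Connection refused" failures in the tests below would be expected to clear as well.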

TestForceSystemdFlag (10.07s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-880000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-880000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.882568375s)

-- stdout --
	* [force-systemd-flag-880000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-880000" primary control-plane node in "force-systemd-flag-880000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-880000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 10:58:03.061321    9027 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:58:03.061476    9027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:58:03.061479    9027 out.go:304] Setting ErrFile to fd 2...
	I0807 10:58:03.061481    9027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:58:03.061616    9027 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:58:03.062713    9027 out.go:298] Setting JSON to false
	I0807 10:58:03.078550    9027 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5252,"bootTime":1723048231,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 10:58:03.078625    9027 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 10:58:03.085069    9027 out.go:177] * [force-systemd-flag-880000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 10:58:03.091981    9027 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 10:58:03.092024    9027 notify.go:220] Checking for updates...
	I0807 10:58:03.099997    9027 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 10:58:03.102962    9027 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 10:58:03.105985    9027 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 10:58:03.108981    9027 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 10:58:03.110505    9027 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 10:58:03.114353    9027 config.go:182] Loaded profile config "force-systemd-env-875000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:58:03.114432    9027 config.go:182] Loaded profile config "multinode-190000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:58:03.114482    9027 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 10:58:03.117952    9027 out.go:177] * Using the qemu2 driver based on user configuration
	I0807 10:58:03.122972    9027 start.go:297] selected driver: qemu2
	I0807 10:58:03.122978    9027 start.go:901] validating driver "qemu2" against <nil>
	I0807 10:58:03.122984    9027 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 10:58:03.125134    9027 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 10:58:03.134948    9027 out.go:177] * Automatically selected the socket_vmnet network
	I0807 10:58:03.138081    9027 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0807 10:58:03.138124    9027 cni.go:84] Creating CNI manager for ""
	I0807 10:58:03.138135    9027 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 10:58:03.138139    9027 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 10:58:03.138173    9027 start.go:340] cluster config:
	{Name:force-systemd-flag-880000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-880000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:58:03.142092    9027 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:58:03.150002    9027 out.go:177] * Starting "force-systemd-flag-880000" primary control-plane node in "force-systemd-flag-880000" cluster
	I0807 10:58:03.154087    9027 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 10:58:03.154105    9027 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 10:58:03.154124    9027 cache.go:56] Caching tarball of preloaded images
	I0807 10:58:03.154191    9027 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 10:58:03.154197    9027 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 10:58:03.154262    9027 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/force-systemd-flag-880000/config.json ...
	I0807 10:58:03.154274    9027 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/force-systemd-flag-880000/config.json: {Name:mkf9f51c4f6ed00c3ce640a498d09e98aa7b11ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 10:58:03.154501    9027 start.go:360] acquireMachinesLock for force-systemd-flag-880000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:58:03.154537    9027 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "force-systemd-flag-880000"
	I0807 10:58:03.154549    9027 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-880000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-880000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 10:58:03.154579    9027 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 10:58:03.163008    9027 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0807 10:58:03.181154    9027 start.go:159] libmachine.API.Create for "force-systemd-flag-880000" (driver="qemu2")
	I0807 10:58:03.181182    9027 client.go:168] LocalClient.Create starting
	I0807 10:58:03.181245    9027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 10:58:03.181279    9027 main.go:141] libmachine: Decoding PEM data...
	I0807 10:58:03.181289    9027 main.go:141] libmachine: Parsing certificate...
	I0807 10:58:03.181325    9027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 10:58:03.181349    9027 main.go:141] libmachine: Decoding PEM data...
	I0807 10:58:03.181359    9027 main.go:141] libmachine: Parsing certificate...
	I0807 10:58:03.181783    9027 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 10:58:03.337330    9027 main.go:141] libmachine: Creating SSH key...
	I0807 10:58:03.498831    9027 main.go:141] libmachine: Creating Disk image...
	I0807 10:58:03.498838    9027 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 10:58:03.499055    9027 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-flag-880000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-flag-880000/disk.qcow2
	I0807 10:58:03.508716    9027 main.go:141] libmachine: STDOUT: 
	I0807 10:58:03.508741    9027 main.go:141] libmachine: STDERR: 
	I0807 10:58:03.508797    9027 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-flag-880000/disk.qcow2 +20000M
	I0807 10:58:03.516659    9027 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 10:58:03.516672    9027 main.go:141] libmachine: STDERR: 
	I0807 10:58:03.516687    9027 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-flag-880000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-flag-880000/disk.qcow2
	I0807 10:58:03.516691    9027 main.go:141] libmachine: Starting QEMU VM...
	I0807 10:58:03.516704    9027 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:58:03.516729    9027 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-flag-880000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-flag-880000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-flag-880000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:20:a3:05:8d:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-flag-880000/disk.qcow2
	I0807 10:58:03.518328    9027 main.go:141] libmachine: STDOUT: 
	I0807 10:58:03.518345    9027 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:58:03.518362    9027 client.go:171] duration metric: took 337.177083ms to LocalClient.Create
	I0807 10:58:05.520536    9027 start.go:128] duration metric: took 2.365952625s to createHost
	I0807 10:58:05.520602    9027 start.go:83] releasing machines lock for "force-systemd-flag-880000", held for 2.3660715s
	W0807 10:58:05.520657    9027 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:58:05.540702    9027 out.go:177] * Deleting "force-systemd-flag-880000" in qemu2 ...
	W0807 10:58:05.562205    9027 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:58:05.562223    9027 start.go:729] Will try again in 5 seconds ...
	I0807 10:58:10.564473    9027 start.go:360] acquireMachinesLock for force-systemd-flag-880000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:58:10.572450    9027 start.go:364] duration metric: took 7.861333ms to acquireMachinesLock for "force-systemd-flag-880000"
	I0807 10:58:10.572635    9027 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-880000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-880000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 10:58:10.572898    9027 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 10:58:10.587596    9027 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0807 10:58:10.636860    9027 start.go:159] libmachine.API.Create for "force-systemd-flag-880000" (driver="qemu2")
	I0807 10:58:10.636907    9027 client.go:168] LocalClient.Create starting
	I0807 10:58:10.637044    9027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 10:58:10.637117    9027 main.go:141] libmachine: Decoding PEM data...
	I0807 10:58:10.637137    9027 main.go:141] libmachine: Parsing certificate...
	I0807 10:58:10.637203    9027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 10:58:10.637249    9027 main.go:141] libmachine: Decoding PEM data...
	I0807 10:58:10.637262    9027 main.go:141] libmachine: Parsing certificate...
	I0807 10:58:10.638006    9027 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 10:58:10.804268    9027 main.go:141] libmachine: Creating SSH key...
	I0807 10:58:10.835947    9027 main.go:141] libmachine: Creating Disk image...
	I0807 10:58:10.835952    9027 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 10:58:10.836157    9027 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-flag-880000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-flag-880000/disk.qcow2
	I0807 10:58:10.845358    9027 main.go:141] libmachine: STDOUT: 
	I0807 10:58:10.845374    9027 main.go:141] libmachine: STDERR: 
	I0807 10:58:10.845422    9027 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-flag-880000/disk.qcow2 +20000M
	I0807 10:58:10.853273    9027 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 10:58:10.853291    9027 main.go:141] libmachine: STDERR: 
	I0807 10:58:10.853302    9027 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-flag-880000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-flag-880000/disk.qcow2
	I0807 10:58:10.853305    9027 main.go:141] libmachine: Starting QEMU VM...
	I0807 10:58:10.853314    9027 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:58:10.853339    9027 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-flag-880000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-flag-880000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-flag-880000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:e3:c0:6d:b6:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-flag-880000/disk.qcow2
	I0807 10:58:10.854948    9027 main.go:141] libmachine: STDOUT: 
	I0807 10:58:10.854965    9027 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:58:10.854980    9027 client.go:171] duration metric: took 218.065708ms to LocalClient.Create
	I0807 10:58:12.857146    9027 start.go:128] duration metric: took 2.2842005s to createHost
	I0807 10:58:12.857234    9027 start.go:83] releasing machines lock for "force-systemd-flag-880000", held for 2.284761958s
	W0807 10:58:12.857604    9027 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-880000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-880000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:58:12.873288    9027 out.go:177] 
	W0807 10:58:12.884597    9027 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 10:58:12.884632    9027 out.go:239] * 
	* 
	W0807 10:58:12.887012    9027 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 10:58:12.902292    9027 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-880000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-880000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-880000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (78.848375ms)

-- stdout --
	* The control-plane node force-systemd-flag-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-880000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-880000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-07 10:58:12.998411 -0700 PDT m=+745.591588793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-880000 -n force-systemd-flag-880000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-880000 -n force-systemd-flag-880000: exit status 7 (33.118334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-880000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-880000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-880000
--- FAIL: TestForceSystemdFlag (10.07s)
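
Note that the disk-image steps succeed on every attempt above (both qemu-img invocations return an empty STDERR before the socket_vmnet_client step fails), which points away from qemu-img and at the host networking helper. A standalone sketch of the same image workflow, with illustrative paths and sizes, can confirm qemu-img is healthy in isolation:

	# Mirror the "Creating 20000 MB hard disk image" sequence from the logs:
	# raw scratch image, then qcow2 conversion, then a +20000M resize
	qemu-img create -f raw /tmp/scratch.raw 1G
	qemu-img convert -f raw -O qcow2 /tmp/scratch.raw /tmp/scratch.qcow2
	qemu-img resize /tmp/scratch.qcow2 +20000M
	qemu-img info /tmp/scratch.qcow2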

TestForceSystemdEnv (11.23s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-875000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-875000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.031883209s)

-- stdout --
	* [force-systemd-env-875000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-875000" primary control-plane node in "force-systemd-env-875000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-875000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 10:57:56.999298    8993 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:57:56.999465    8993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:57:56.999468    8993 out.go:304] Setting ErrFile to fd 2...
	I0807 10:57:56.999470    8993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:57:56.999595    8993 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:57:57.000664    8993 out.go:298] Setting JSON to false
	I0807 10:57:57.017051    8993 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5246,"bootTime":1723048231,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 10:57:57.017125    8993 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 10:57:57.026168    8993 out.go:177] * [force-systemd-env-875000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 10:57:57.033232    8993 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 10:57:57.033279    8993 notify.go:220] Checking for updates...
	I0807 10:57:57.039207    8993 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 10:57:57.042209    8993 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 10:57:57.043334    8993 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 10:57:57.050172    8993 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 10:57:57.058176    8993 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0807 10:57:57.061544    8993 config.go:182] Loaded profile config "multinode-190000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:57:57.061597    8993 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 10:57:57.066066    8993 out.go:177] * Using the qemu2 driver based on user configuration
	I0807 10:57:57.073219    8993 start.go:297] selected driver: qemu2
	I0807 10:57:57.073226    8993 start.go:901] validating driver "qemu2" against <nil>
	I0807 10:57:57.073232    8993 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 10:57:57.075452    8993 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 10:57:57.081187    8993 out.go:177] * Automatically selected the socket_vmnet network
	I0807 10:57:57.085294    8993 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0807 10:57:57.085325    8993 cni.go:84] Creating CNI manager for ""
	I0807 10:57:57.085332    8993 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 10:57:57.085342    8993 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 10:57:57.085375    8993 start.go:340] cluster config:
	{Name:force-systemd-env-875000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:57:57.089132    8993 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:57:57.096215    8993 out.go:177] * Starting "force-systemd-env-875000" primary control-plane node in "force-systemd-env-875000" cluster
	I0807 10:57:57.100256    8993 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 10:57:57.100272    8993 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 10:57:57.100279    8993 cache.go:56] Caching tarball of preloaded images
	I0807 10:57:57.100354    8993 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 10:57:57.100360    8993 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 10:57:57.100410    8993 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/force-systemd-env-875000/config.json ...
	I0807 10:57:57.100421    8993 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/force-systemd-env-875000/config.json: {Name:mk160f30c9d405e0edf088421cb85582ab248f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 10:57:57.100706    8993 start.go:360] acquireMachinesLock for force-systemd-env-875000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:57:57.100741    8993 start.go:364] duration metric: took 28.084µs to acquireMachinesLock for "force-systemd-env-875000"
	I0807 10:57:57.100752    8993 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-875000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 10:57:57.100781    8993 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 10:57:57.109195    8993 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0807 10:57:57.125817    8993 start.go:159] libmachine.API.Create for "force-systemd-env-875000" (driver="qemu2")
	I0807 10:57:57.125846    8993 client.go:168] LocalClient.Create starting
	I0807 10:57:57.125899    8993 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 10:57:57.125925    8993 main.go:141] libmachine: Decoding PEM data...
	I0807 10:57:57.125935    8993 main.go:141] libmachine: Parsing certificate...
	I0807 10:57:57.125972    8993 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 10:57:57.125996    8993 main.go:141] libmachine: Decoding PEM data...
	I0807 10:57:57.126007    8993 main.go:141] libmachine: Parsing certificate...
	I0807 10:57:57.126372    8993 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 10:57:57.283724    8993 main.go:141] libmachine: Creating SSH key...
	I0807 10:57:57.371561    8993 main.go:141] libmachine: Creating Disk image...
	I0807 10:57:57.371576    8993 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 10:57:57.371777    8993 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-env-875000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-env-875000/disk.qcow2
	I0807 10:57:57.380783    8993 main.go:141] libmachine: STDOUT: 
	I0807 10:57:57.380806    8993 main.go:141] libmachine: STDERR: 
	I0807 10:57:57.380883    8993 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-env-875000/disk.qcow2 +20000M
	I0807 10:57:57.389249    8993 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 10:57:57.389268    8993 main.go:141] libmachine: STDERR: 
	I0807 10:57:57.389286    8993 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-env-875000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-env-875000/disk.qcow2
	I0807 10:57:57.389292    8993 main.go:141] libmachine: Starting QEMU VM...
	I0807 10:57:57.389302    8993 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:57:57.389333    8993 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-env-875000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-env-875000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-env-875000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:a8:79:c9:2c:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-env-875000/disk.qcow2
	I0807 10:57:57.390997    8993 main.go:141] libmachine: STDOUT: 
	I0807 10:57:57.391013    8993 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:57:57.391032    8993 client.go:171] duration metric: took 265.183708ms to LocalClient.Create
	I0807 10:57:59.393216    8993 start.go:128] duration metric: took 2.292425459s to createHost
	I0807 10:57:59.393284    8993 start.go:83] releasing machines lock for "force-systemd-env-875000", held for 2.292551334s
	W0807 10:57:59.393361    8993 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:57:59.399971    8993 out.go:177] * Deleting "force-systemd-env-875000" in qemu2 ...
	W0807 10:57:59.425082    8993 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:57:59.425105    8993 start.go:729] Will try again in 5 seconds ...
	I0807 10:58:04.427312    8993 start.go:360] acquireMachinesLock for force-systemd-env-875000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:58:05.520779    8993 start.go:364] duration metric: took 1.093367917s to acquireMachinesLock for "force-systemd-env-875000"
	I0807 10:58:05.520898    8993 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-875000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 10:58:05.521133    8993 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 10:58:05.529680    8993 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0807 10:58:05.577418    8993 start.go:159] libmachine.API.Create for "force-systemd-env-875000" (driver="qemu2")
	I0807 10:58:05.577466    8993 client.go:168] LocalClient.Create starting
	I0807 10:58:05.577592    8993 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 10:58:05.577651    8993 main.go:141] libmachine: Decoding PEM data...
	I0807 10:58:05.577669    8993 main.go:141] libmachine: Parsing certificate...
	I0807 10:58:05.577738    8993 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 10:58:05.577783    8993 main.go:141] libmachine: Decoding PEM data...
	I0807 10:58:05.577794    8993 main.go:141] libmachine: Parsing certificate...
	I0807 10:58:05.578431    8993 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 10:58:05.746886    8993 main.go:141] libmachine: Creating SSH key...
	I0807 10:58:05.923873    8993 main.go:141] libmachine: Creating Disk image...
	I0807 10:58:05.923880    8993 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 10:58:05.924116    8993 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-env-875000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-env-875000/disk.qcow2
	I0807 10:58:05.933523    8993 main.go:141] libmachine: STDOUT: 
	I0807 10:58:05.933541    8993 main.go:141] libmachine: STDERR: 
	I0807 10:58:05.933592    8993 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-env-875000/disk.qcow2 +20000M
	I0807 10:58:05.941434    8993 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 10:58:05.941446    8993 main.go:141] libmachine: STDERR: 
	I0807 10:58:05.941456    8993 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-env-875000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-env-875000/disk.qcow2
	I0807 10:58:05.941460    8993 main.go:141] libmachine: Starting QEMU VM...
	I0807 10:58:05.941473    8993 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:58:05.941500    8993 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-env-875000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-env-875000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-env-875000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:f9:b3:54:99:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/force-systemd-env-875000/disk.qcow2
	I0807 10:58:05.943114    8993 main.go:141] libmachine: STDOUT: 
	I0807 10:58:05.943137    8993 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:58:05.943149    8993 client.go:171] duration metric: took 365.681541ms to LocalClient.Create
	I0807 10:58:07.945339    8993 start.go:128] duration metric: took 2.424196458s to createHost
	I0807 10:58:07.945390    8993 start.go:83] releasing machines lock for "force-systemd-env-875000", held for 2.42459275s
	W0807 10:58:07.945734    8993 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-875000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-875000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:58:07.962436    8993 out.go:177] 
	W0807 10:58:07.972355    8993 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 10:58:07.972398    8993 out.go:239] * 
	* 
	W0807 10:58:07.975110    8993 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 10:58:07.986127    8993 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-875000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-875000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-875000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (81.681042ms)

-- stdout --
	* The control-plane node force-systemd-env-875000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-875000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-875000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-07 10:58:08.085602 -0700 PDT m=+740.678744251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-875000 -n force-systemd-env-875000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-875000 -n force-systemd-env-875000: exit status 7 (34.08525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-875000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-875000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-875000
--- FAIL: TestForceSystemdEnv (11.23s)
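
Every failure in this report reduces to the same STDERR line, `Failed to connect to "/var/run/socket_vmnet": Connection refused`: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must first dial the Unix socket at /var/run/socket_vmnet, and with no socket_vmnet daemon accepting on that path the dial is refused and the VM never boots. Below is a minimal diagnostic sketch (not part of the test suite; the file name and messages are illustrative) that reproduces the dial the client performs.

// probe_socket_vmnet.go — checks whether anything is listening on the
// control socket that socket_vmnet_client needs before it can hand a
// vmnet file descriptor to qemu.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path matches the SocketVMnetPath field in the cluster config logged above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// A refused connection here is the condition behind every
		// GUEST_PROVISION failure in this run.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}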

TestErrorSpam/setup (9.9s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-074000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-074000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 --driver=qemu2 : exit status 80 (9.897908333s)

-- stdout --
	* [nospam-074000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-074000" primary control-plane node in "nospam-074000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-074000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-074000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-074000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-074000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-074000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19389
- KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-074000" primary control-plane node in "nospam-074000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-074000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-074000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.90s)
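
The stderr this test flags as spam comes from minikube's retry path, visible in the verbose TestForceSystemdEnv log earlier: the first StartHost failure is downgraded to a warning, the half-created profile is deleted, and creation is attempted once more after a fixed delay ("Will try again in 5 seconds ...") before GUEST_PROVISION is raised. A compressed sketch of that control flow follows, with hypothetical names standing in for the real start.go internals.

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the libmachine create path; in this run it
// always fails the same way.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func startWithRetry() error {
	err := createHost()
	if err == nil {
		return nil
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	time.Sleep(5 * time.Second) // the "Will try again in 5 seconds" pause
	return createHost()         // a second failure becomes the GUEST_PROVISION exit
}

func main() {
	if err := startWithRetry(); err != nil {
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
	}
}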

TestFunctional/serial/StartWithProxy (10.03s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-908000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-908000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.957243541s)

-- stdout --
	* [functional-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-908000" primary control-plane node in "functional-908000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-908000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51067 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51067 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51067 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-908000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19389
- KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-908000" primary control-plane node in "functional-908000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-908000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51067 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51067 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51067 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (71.004958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (10.03s)
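
Two expectations fail here: the start itself (the same socket_vmnet refusal) and the proxy-related output the test greps for. This test's environment sets HTTP_PROXY=localhost:51067, and minikube warns that a localhost proxy is not forwarded into the docker env, since "localhost" resolved inside the guest would point at the VM rather than the host. A hedged sketch of that locality check follows; isLocalProxy is an illustrative helper name, not minikube's actual function.

package main

import (
	"fmt"
	"net"
	"strings"
)

// isLocalProxy reports whether a proxy address points back at the host
// itself, in which case passing it into the VM would be meaningless.
func isLocalProxy(proxy string) bool {
	host := proxy
	if h, _, err := net.SplitHostPort(proxy); err == nil {
		host = h
	}
	return strings.EqualFold(host, "localhost") || host == "127.0.0.1"
}

func main() {
	httpProxy := "localhost:51067" // the value set in this test's environment
	if isLocalProxy(httpProxy) {
		fmt.Printf("! Local proxy ignored: not passing HTTP_PROXY=%s to docker env.\n", httpProxy)
	}
}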

TestFunctional/serial/SoftStart (5.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-908000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-908000 --alsologtostderr -v=8: exit status 80 (5.185470541s)

-- stdout --
	* [functional-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-908000" primary control-plane node in "functional-908000" cluster
	* Restarting existing qemu2 VM for "functional-908000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-908000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 10:47:29.425748    7459 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:47:29.425877    7459 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:47:29.425880    7459 out.go:304] Setting ErrFile to fd 2...
	I0807 10:47:29.425882    7459 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:47:29.426017    7459 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:47:29.427026    7459 out.go:298] Setting JSON to false
	I0807 10:47:29.443085    7459 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4618,"bootTime":1723048231,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 10:47:29.443154    7459 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 10:47:29.447862    7459 out.go:177] * [functional-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 10:47:29.453764    7459 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 10:47:29.453828    7459 notify.go:220] Checking for updates...
	I0807 10:47:29.459273    7459 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 10:47:29.462753    7459 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 10:47:29.465760    7459 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 10:47:29.468803    7459 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 10:47:29.471735    7459 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 10:47:29.475065    7459 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:47:29.475133    7459 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 10:47:29.479719    7459 out.go:177] * Using the qemu2 driver based on existing profile
	I0807 10:47:29.486738    7459 start.go:297] selected driver: qemu2
	I0807 10:47:29.486746    7459 start.go:901] validating driver "qemu2" against &{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:47:29.486806    7459 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 10:47:29.489090    7459 cni.go:84] Creating CNI manager for ""
	I0807 10:47:29.489108    7459 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 10:47:29.489150    7459 start.go:340] cluster config:
	{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:47:29.492615    7459 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:47:29.499727    7459 out.go:177] * Starting "functional-908000" primary control-plane node in "functional-908000" cluster
	I0807 10:47:29.503800    7459 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 10:47:29.503816    7459 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 10:47:29.503825    7459 cache.go:56] Caching tarball of preloaded images
	I0807 10:47:29.503894    7459 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 10:47:29.503900    7459 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 10:47:29.503956    7459 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/functional-908000/config.json ...
	I0807 10:47:29.504458    7459 start.go:360] acquireMachinesLock for functional-908000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:47:29.504485    7459 start.go:364] duration metric: took 21.75µs to acquireMachinesLock for "functional-908000"
	I0807 10:47:29.504494    7459 start.go:96] Skipping create...Using existing machine configuration
	I0807 10:47:29.504502    7459 fix.go:54] fixHost starting: 
	I0807 10:47:29.504627    7459 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
	W0807 10:47:29.504635    7459 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 10:47:29.512795    7459 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
	I0807 10:47:29.516578    7459 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:47:29.516613    7459 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:c0:a6:4e:a5:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/disk.qcow2
	I0807 10:47:29.518814    7459 main.go:141] libmachine: STDOUT: 
	I0807 10:47:29.518832    7459 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:47:29.518861    7459 fix.go:56] duration metric: took 14.360833ms for fixHost
	I0807 10:47:29.518865    7459 start.go:83] releasing machines lock for "functional-908000", held for 14.375334ms
	W0807 10:47:29.518872    7459 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 10:47:29.518905    7459 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:47:29.518909    7459 start.go:729] Will try again in 5 seconds ...
	I0807 10:47:34.519719    7459 start.go:360] acquireMachinesLock for functional-908000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:47:34.520109    7459 start.go:364] duration metric: took 318.334µs to acquireMachinesLock for "functional-908000"
	I0807 10:47:34.520244    7459 start.go:96] Skipping create...Using existing machine configuration
	I0807 10:47:34.520264    7459 fix.go:54] fixHost starting: 
	I0807 10:47:34.520964    7459 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
	W0807 10:47:34.520987    7459 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 10:47:34.529374    7459 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
	I0807 10:47:34.533541    7459 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:47:34.533828    7459 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:c0:a6:4e:a5:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/disk.qcow2
	I0807 10:47:34.543203    7459 main.go:141] libmachine: STDOUT: 
	I0807 10:47:34.543265    7459 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:47:34.543330    7459 fix.go:56] duration metric: took 23.064666ms for fixHost
	I0807 10:47:34.543347    7459 start.go:83] releasing machines lock for "functional-908000", held for 23.219333ms
	W0807 10:47:34.543561    7459 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:47:34.551490    7459 out.go:177] 
	W0807 10:47:34.555616    7459 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 10:47:34.555648    7459 out.go:239] * 
	* 
	W0807 10:47:34.558412    7459 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 10:47:34.565585    7459 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-908000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.187022458s for "functional-908000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (68.423042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)
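
Unlike the fresh create in StartWithProxy, SoftStart runs against an existing profile, so the log above takes the fix path instead of create: fix.go finds the machine in state=Stopped and re-executes the same qemu command, which dies on the same socket dial. A minimal sketch of that branch follows, assuming a hypothetical Machine type; the real logic lives in minikube's fix.go.

package main

import "fmt"

// Machine is a hypothetical stand-in for libmachine's host record.
type Machine struct {
	Name    string
	Stopped bool
}

// fixHost mirrors the "Skipping create...Using existing machine
// configuration" branch: a Stopped machine is restarted rather than
// recreated. startVM stands in for the qemu launch that fails above.
func fixHost(m *Machine, startVM func(*Machine) error) error {
	if m.Stopped {
		fmt.Printf("* Restarting existing qemu2 VM for %q ...\n", m.Name)
		return startVM(m)
	}
	return nil // already running; nothing to fix
}

func main() {
	m := &Machine{Name: "functional-908000", Stopped: true}
	err := fixHost(m, func(*Machine) error {
		return fmt.Errorf(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	})
	if err != nil {
		fmt.Println("fixHost failed:", err)
	}
}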

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (30.242833ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-908000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (29.17075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
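
This check needs no VM at all: it only asserts that kubectl's current context equals the profile name, and it fails because the aborted start never wrote a context into the kubeconfig. The assertion re-expressed as a standalone sketch (expected name copied from the test; error handling simplified):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the test runs: kubectl config current-context.
	out, err := exec.Command("kubectl", "config", "current-context").Output()
	if err != nil {
		// On this runner kubectl answers "current-context is not set".
		fmt.Println("failed to get current-context:", err)
		return
	}
	if got := strings.TrimSpace(string(out)); got != "functional-908000" {
		fmt.Printf("expected current-context = %q, but got %q\n", "functional-908000", got)
	}
}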

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-908000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-908000 get po -A: exit status 1 (26.318084ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-908000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-908000\n"*: args "kubectl --context functional-908000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-908000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (29.947416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl images: exit status 83 (41.884292ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (39.878667ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-908000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (39.831875ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (41.957708ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)
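
The test's intended sequence is: delete pause:latest inside the VM, confirm crictl inspecti now fails, run `cache reload`, then confirm inspecti succeeds again. With the host stopped, every ssh step exits 83 before the real comparison ever happens. The same four steps as a standalone sketch (binary path and profile name copied from the invocations above):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to the minikube binary under test and echoes its output.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-darwin-arm64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	p := "functional-908000"
	_ = run("-p", p, "ssh", "sudo", "docker", "rmi", "registry.k8s.io/pause:latest")
	_ = run("-p", p, "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest") // expected to fail: image gone
	_ = run("-p", p, "cache", "reload")
	// After the reload the image must be inspectable again.
	if err := run("-p", p, "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("cache reload did not restore the image:", err)
	}
}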

TestFunctional/serial/MinikubeKubectlCmd (0.73s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 kubectl -- --context functional-908000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 kubectl -- --context functional-908000 get pods: exit status 1 (698.343459ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-908000
	* no server found for cluster "functional-908000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-908000 kubectl -- --context functional-908000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (31.367708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.73s)
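The "context was not found" error is a downstream symptom: the failed start never wrote a functional-908000 entry into the kubeconfig, so any kubectl invocation naming that context must fail. A quick way to inspect what kubectl actually sees (standard kubectl commands, not part of the test; the KUBECONFIG path is the one reported elsewhere in this log):

	kubectl config get-contexts
	KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig kubectl config view -o jsonpath='{.clusters[*].name}'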

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.97s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-908000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-908000 get pods: exit status 1 (937.074292ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-908000
	* no server found for cluster "functional-908000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-908000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (29.242084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.97s)

TestFunctional/serial/ExtraConfig (5.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-908000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-908000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.186561209s)

-- stdout --
	* [functional-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-908000" primary control-plane node in "functional-908000" cluster
	* Restarting existing qemu2 VM for "functional-908000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-908000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-908000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.187129875s for "functional-908000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (67.972791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)
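The underlying failure is the same one seen throughout this report: QEMU is launched via socket_vmnet_client, and the connection to the socket_vmnet daemon's socket is refused, so the VM can never be restarted. A minimal sketch for checking the daemon on the build host, using the socket and client paths shown in the log (the launchd query is an assumption about how socket_vmnet was installed and may need adjusting):

	ls -l /var/run/socket_vmnet             # does the daemon's socket exist?
	pgrep -fl socket_vmnet                  # is a socket_vmnet process running?
	sudo launchctl list | grep -i vmnet     # assumption: launchd-managed install

If the daemon is down, restarting it, or falling back to the recovery minikube itself suggests ("minikube delete -p functional-908000" followed by a fresh start), is the usual path forward.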

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-908000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-908000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.21ms)

** stderr ** 
	error: context "functional-908000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-908000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (29.574167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 logs: exit status 83 (76.369208ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-143000 | jenkins | v1.33.1 | 07 Aug 24 10:45 PDT |                     |
	|         | -p download-only-143000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
	| delete  | -p download-only-143000                                                  | download-only-143000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
	| start   | -o=json --download-only                                                  | download-only-616000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT |                     |
	|         | -p download-only-616000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
	| delete  | -p download-only-616000                                                  | download-only-616000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
	| start   | -o=json --download-only                                                  | download-only-658000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT |                     |
	|         | -p download-only-658000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                                        |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
	| delete  | -p download-only-658000                                                  | download-only-658000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
	| delete  | -p download-only-143000                                                  | download-only-143000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
	| delete  | -p download-only-616000                                                  | download-only-616000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
	| delete  | -p download-only-658000                                                  | download-only-658000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
	| start   | --download-only -p                                                       | binary-mirror-238000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT |                     |
	|         | binary-mirror-238000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:51035                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-238000                                                  | binary-mirror-238000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
	| addons  | enable dashboard -p                                                      | addons-541000        | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT |                     |
	|         | addons-541000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-541000        | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT |                     |
	|         | addons-541000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-541000 --wait=true                                             | addons-541000        | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-541000                                                         | addons-541000        | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
	| start   | -p nospam-074000 -n=1 --memory=2250 --wait=false                         | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-074000                                                         | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
	| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
	|         | minikube-local-cache-test:functional-908000                              |                      |         |         |                     |                     |
	| cache   | functional-908000 cache delete                                           | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
	|         | minikube-local-cache-test:functional-908000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
	| ssh     | functional-908000 ssh sudo                                               | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-908000                                                        | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-908000 ssh                                                    | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-908000 cache reload                                           | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
	| ssh     | functional-908000 ssh                                                    | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-908000 kubectl --                                             | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
	|         | --context functional-908000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 10:47:39
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 10:47:39.715670    7534 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:47:39.715799    7534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:47:39.715801    7534 out.go:304] Setting ErrFile to fd 2...
	I0807 10:47:39.715802    7534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:47:39.715916    7534 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:47:39.716989    7534 out.go:298] Setting JSON to false
	I0807 10:47:39.732668    7534 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4628,"bootTime":1723048231,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 10:47:39.732727    7534 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 10:47:39.737125    7534 out.go:177] * [functional-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 10:47:39.746017    7534 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 10:47:39.746082    7534 notify.go:220] Checking for updates...
	I0807 10:47:39.753126    7534 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 10:47:39.754559    7534 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 10:47:39.758127    7534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 10:47:39.761114    7534 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 10:47:39.764136    7534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 10:47:39.767478    7534 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:47:39.767525    7534 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 10:47:39.772102    7534 out.go:177] * Using the qemu2 driver based on existing profile
	I0807 10:47:39.779059    7534 start.go:297] selected driver: qemu2
	I0807 10:47:39.779062    7534 start.go:901] validating driver "qemu2" against &{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:47:39.779109    7534 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 10:47:39.781604    7534 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 10:47:39.781623    7534 cni.go:84] Creating CNI manager for ""
	I0807 10:47:39.781633    7534 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 10:47:39.781677    7534 start.go:340] cluster config:
	{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:47:39.785345    7534 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:47:39.793087    7534 out.go:177] * Starting "functional-908000" primary control-plane node in "functional-908000" cluster
	I0807 10:47:39.797118    7534 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 10:47:39.797131    7534 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 10:47:39.797141    7534 cache.go:56] Caching tarball of preloaded images
	I0807 10:47:39.797195    7534 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 10:47:39.797199    7534 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 10:47:39.797258    7534 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/functional-908000/config.json ...
	I0807 10:47:39.797733    7534 start.go:360] acquireMachinesLock for functional-908000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:47:39.797763    7534 start.go:364] duration metric: took 26.708µs to acquireMachinesLock for "functional-908000"
	I0807 10:47:39.797770    7534 start.go:96] Skipping create...Using existing machine configuration
	I0807 10:47:39.797777    7534 fix.go:54] fixHost starting: 
	I0807 10:47:39.797891    7534 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
	W0807 10:47:39.797897    7534 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 10:47:39.803054    7534 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
	I0807 10:47:39.815076    7534 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:47:39.815109    7534 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:c0:a6:4e:a5:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/disk.qcow2
	I0807 10:47:39.817140    7534 main.go:141] libmachine: STDOUT: 
	I0807 10:47:39.817156    7534 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:47:39.817185    7534 fix.go:56] duration metric: took 19.409625ms for fixHost
	I0807 10:47:39.817189    7534 start.go:83] releasing machines lock for "functional-908000", held for 19.422917ms
	W0807 10:47:39.817193    7534 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 10:47:39.817233    7534 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:47:39.817238    7534 start.go:729] Will try again in 5 seconds ...
	I0807 10:47:44.819580    7534 start.go:360] acquireMachinesLock for functional-908000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:47:44.820087    7534 start.go:364] duration metric: took 417.209µs to acquireMachinesLock for "functional-908000"
	I0807 10:47:44.820215    7534 start.go:96] Skipping create...Using existing machine configuration
	I0807 10:47:44.820230    7534 fix.go:54] fixHost starting: 
	I0807 10:47:44.820947    7534 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
	W0807 10:47:44.820965    7534 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 10:47:44.824451    7534 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
	I0807 10:47:44.831316    7534 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:47:44.831651    7534 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:c0:a6:4e:a5:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/disk.qcow2
	I0807 10:47:44.841102    7534 main.go:141] libmachine: STDOUT: 
	I0807 10:47:44.841164    7534 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:47:44.841245    7534 fix.go:56] duration metric: took 21.017458ms for fixHost
	I0807 10:47:44.841261    7534 start.go:83] releasing machines lock for "functional-908000", held for 21.157416ms
	W0807 10:47:44.841503    7534 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:47:44.849403    7534 out.go:177] 
	W0807 10:47:44.853337    7534 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 10:47:44.853369    7534 out.go:239] * 
	W0807 10:47:44.855935    7534 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 10:47:44.863357    7534 out.go:177] 
	
	
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-908000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-143000 | jenkins | v1.33.1 | 07 Aug 24 10:45 PDT |                     |
|         | -p download-only-143000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
| delete  | -p download-only-143000                                                  | download-only-143000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
| start   | -o=json --download-only                                                  | download-only-616000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT |                     |
|         | -p download-only-616000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
| delete  | -p download-only-616000                                                  | download-only-616000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
| start   | -o=json --download-only                                                  | download-only-658000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT |                     |
|         | -p download-only-658000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-rc.0                                        |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
| delete  | -p download-only-658000                                                  | download-only-658000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
| delete  | -p download-only-143000                                                  | download-only-143000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
| delete  | -p download-only-616000                                                  | download-only-616000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
| delete  | -p download-only-658000                                                  | download-only-658000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
| start   | --download-only -p                                                       | binary-mirror-238000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT |                     |
|         | binary-mirror-238000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51035                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-238000                                                  | binary-mirror-238000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
| addons  | enable dashboard -p                                                      | addons-541000        | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT |                     |
|         | addons-541000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-541000        | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT |                     |
|         | addons-541000                                                            |                      |         |         |                     |                     |
| start   | -p addons-541000 --wait=true                                             | addons-541000        | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-541000                                                         | addons-541000        | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
| start   | -p nospam-074000 -n=1 --memory=2250 --wait=false                         | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-074000 --log_dir                                                  | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-074000                                                         | nospam-074000        | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
|         | minikube-local-cache-test:functional-908000                              |                      |         |         |                     |                     |
| cache   | functional-908000 cache delete                                           | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
|         | minikube-local-cache-test:functional-908000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
| ssh     | functional-908000 ssh sudo                                               | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-908000                                                        | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-908000 ssh                                                    | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-908000 cache reload                                           | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
| ssh     | functional-908000 ssh                                                    | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT | 07 Aug 24 10:47 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-908000 kubectl --                                             | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
|         | --context functional-908000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.1 | 07 Aug 24 10:47 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
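The Audit table above is rendered from minikube's local audit log. As a minimal sketch, the same columns can be extracted directly on the test host, assuming the default audit.json location under MINIKUBE_HOME and the field names used by recent minikube releases (both are assumptions, not taken from this report):

  # audit.json holds one JSON event per line; print the columns shown in the table.
  jq -r '[.data.command, .data.args, .data.profile, .data.user, .data.version] | @tsv' \
    "${MINIKUBE_HOME:-$HOME/.minikube}/logs/audit.json"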
==> Last Start <==
Log file created at: 2024/08/07 10:47:39
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0807 10:47:39.715670    7534 out.go:291] Setting OutFile to fd 1 ...
I0807 10:47:39.715799    7534 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 10:47:39.715801    7534 out.go:304] Setting ErrFile to fd 2...
I0807 10:47:39.715802    7534 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 10:47:39.715916    7534 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
I0807 10:47:39.716989    7534 out.go:298] Setting JSON to false
I0807 10:47:39.732668    7534 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4628,"bootTime":1723048231,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0807 10:47:39.732727    7534 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0807 10:47:39.737125    7534 out.go:177] * [functional-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0807 10:47:39.746017    7534 out.go:177]   - MINIKUBE_LOCATION=19389
I0807 10:47:39.746082    7534 notify.go:220] Checking for updates...
I0807 10:47:39.753126    7534 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
I0807 10:47:39.754559    7534 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0807 10:47:39.758127    7534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0807 10:47:39.761114    7534 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
I0807 10:47:39.764136    7534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0807 10:47:39.767478    7534 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0807 10:47:39.767525    7534 driver.go:392] Setting default libvirt URI to qemu:///system
I0807 10:47:39.772102    7534 out.go:177] * Using the qemu2 driver based on existing profile
I0807 10:47:39.779059    7534 start.go:297] selected driver: qemu2
I0807 10:47:39.779062    7534 start.go:901] validating driver "qemu2" against &{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0807 10:47:39.779109    7534 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0807 10:47:39.781604    7534 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0807 10:47:39.781623    7534 cni.go:84] Creating CNI manager for ""
I0807 10:47:39.781633    7534 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0807 10:47:39.781677    7534 start.go:340] cluster config:
{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0807 10:47:39.785345    7534 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0807 10:47:39.793087    7534 out.go:177] * Starting "functional-908000" primary control-plane node in "functional-908000" cluster
I0807 10:47:39.797118    7534 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0807 10:47:39.797131    7534 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0807 10:47:39.797141    7534 cache.go:56] Caching tarball of preloaded images
I0807 10:47:39.797195    7534 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0807 10:47:39.797199    7534 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0807 10:47:39.797258    7534 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/functional-908000/config.json ...
I0807 10:47:39.797733    7534 start.go:360] acquireMachinesLock for functional-908000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0807 10:47:39.797763    7534 start.go:364] duration metric: took 26.708µs to acquireMachinesLock for "functional-908000"
I0807 10:47:39.797770    7534 start.go:96] Skipping create...Using existing machine configuration
I0807 10:47:39.797777    7534 fix.go:54] fixHost starting: 
I0807 10:47:39.797891    7534 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
W0807 10:47:39.797897    7534 fix.go:138] unexpected machine state, will restart: <nil>
I0807 10:47:39.803054    7534 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
I0807 10:47:39.815076    7534 qemu.go:418] Using hvf for hardware acceleration
I0807 10:47:39.815109    7534 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:c0:a6:4e:a5:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/disk.qcow2
I0807 10:47:39.817140    7534 main.go:141] libmachine: STDOUT: 
I0807 10:47:39.817156    7534 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0807 10:47:39.817185    7534 fix.go:56] duration metric: took 19.409625ms for fixHost
I0807 10:47:39.817189    7534 start.go:83] releasing machines lock for "functional-908000", held for 19.422917ms
W0807 10:47:39.817193    7534 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0807 10:47:39.817233    7534 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0807 10:47:39.817238    7534 start.go:729] Will try again in 5 seconds ...
I0807 10:47:44.819580    7534 start.go:360] acquireMachinesLock for functional-908000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0807 10:47:44.820087    7534 start.go:364] duration metric: took 417.209µs to acquireMachinesLock for "functional-908000"
I0807 10:47:44.820215    7534 start.go:96] Skipping create...Using existing machine configuration
I0807 10:47:44.820230    7534 fix.go:54] fixHost starting: 
I0807 10:47:44.820947    7534 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
W0807 10:47:44.820965    7534 fix.go:138] unexpected machine state, will restart: <nil>
I0807 10:47:44.824451    7534 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
I0807 10:47:44.831316    7534 qemu.go:418] Using hvf for hardware acceleration
I0807 10:47:44.831651    7534 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:c0:a6:4e:a5:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/disk.qcow2
I0807 10:47:44.841102    7534 main.go:141] libmachine: STDOUT: 
I0807 10:47:44.841164    7534 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0807 10:47:44.841245    7534 fix.go:56] duration metric: took 21.017458ms for fixHost
I0807 10:47:44.841261    7534 start.go:83] releasing machines lock for "functional-908000", held for 21.157416ms
W0807 10:47:44.841503    7534 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0807 10:47:44.849403    7534 out.go:177] 
W0807 10:47:44.853337    7534 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0807 10:47:44.853369    7534 out.go:239] * 
W0807 10:47:44.855935    7534 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0807 10:47:44.863357    7534 out.go:177] 

* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
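The failure above, like the other qemu2 failures in this report, reduces to one condition visible in the log: the driver cannot connect to /var/run/socket_vmnet, so the stopped VM is never restarted. A minimal recovery sketch for the affected host follows, assuming a Homebrew-managed socket_vmnet daemon (the restart incantation is an assumption; the socket path, profile name, and driver come from the log, and --network mirrors the profile's Network:socket_vmnet setting):

  # Verify the socket the qemu2 driver dials actually exists.
  ls -l /var/run/socket_vmnet

  # Restart the daemon; it must run as root to create the socket (Homebrew install assumed).
  HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet

  # Recreate the profile once the socket accepts connections, as the log itself suggests.
  minikube delete -p functional-908000
  minikube start -p functional-908000 --driver=qemu2 --network=socket_vmnet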
TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd3553954631/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
(table identical to the Audit table in the TestFunctional/serial/LogsCmd log above)

==> Last Start <==
Log file created at: 2024/08/07 10:47:39
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0807 10:47:39.715670    7534 out.go:291] Setting OutFile to fd 1 ...
I0807 10:47:39.715799    7534 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 10:47:39.715801    7534 out.go:304] Setting ErrFile to fd 2...
I0807 10:47:39.715802    7534 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 10:47:39.715916    7534 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
I0807 10:47:39.716989    7534 out.go:298] Setting JSON to false
I0807 10:47:39.732668    7534 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4628,"bootTime":1723048231,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0807 10:47:39.732727    7534 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0807 10:47:39.737125    7534 out.go:177] * [functional-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0807 10:47:39.746017    7534 out.go:177]   - MINIKUBE_LOCATION=19389
I0807 10:47:39.746082    7534 notify.go:220] Checking for updates...
I0807 10:47:39.753126    7534 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
I0807 10:47:39.754559    7534 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0807 10:47:39.758127    7534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0807 10:47:39.761114    7534 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
I0807 10:47:39.764136    7534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0807 10:47:39.767478    7534 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0807 10:47:39.767525    7534 driver.go:392] Setting default libvirt URI to qemu:///system
I0807 10:47:39.772102    7534 out.go:177] * Using the qemu2 driver based on existing profile
I0807 10:47:39.779059    7534 start.go:297] selected driver: qemu2
I0807 10:47:39.779062    7534 start.go:901] validating driver "qemu2" against &{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0807 10:47:39.779109    7534 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0807 10:47:39.781604    7534 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0807 10:47:39.781623    7534 cni.go:84] Creating CNI manager for ""
I0807 10:47:39.781633    7534 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0807 10:47:39.781677    7534 start.go:340] cluster config:
{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0807 10:47:39.785345    7534 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0807 10:47:39.793087    7534 out.go:177] * Starting "functional-908000" primary control-plane node in "functional-908000" cluster
I0807 10:47:39.797118    7534 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0807 10:47:39.797131    7534 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0807 10:47:39.797141    7534 cache.go:56] Caching tarball of preloaded images
I0807 10:47:39.797195    7534 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0807 10:47:39.797199    7534 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0807 10:47:39.797258    7534 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/functional-908000/config.json ...
I0807 10:47:39.797733    7534 start.go:360] acquireMachinesLock for functional-908000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0807 10:47:39.797763    7534 start.go:364] duration metric: took 26.708µs to acquireMachinesLock for "functional-908000"
I0807 10:47:39.797770    7534 start.go:96] Skipping create...Using existing machine configuration
I0807 10:47:39.797777    7534 fix.go:54] fixHost starting: 
I0807 10:47:39.797891    7534 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
W0807 10:47:39.797897    7534 fix.go:138] unexpected machine state, will restart: <nil>
I0807 10:47:39.803054    7534 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
I0807 10:47:39.815076    7534 qemu.go:418] Using hvf for hardware acceleration
I0807 10:47:39.815109    7534 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:c0:a6:4e:a5:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/disk.qcow2
I0807 10:47:39.817140    7534 main.go:141] libmachine: STDOUT: 
I0807 10:47:39.817156    7534 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0807 10:47:39.817185    7534 fix.go:56] duration metric: took 19.409625ms for fixHost
I0807 10:47:39.817189    7534 start.go:83] releasing machines lock for "functional-908000", held for 19.422917ms
W0807 10:47:39.817193    7534 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0807 10:47:39.817233    7534 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0807 10:47:39.817238    7534 start.go:729] Will try again in 5 seconds ...
I0807 10:47:44.819580    7534 start.go:360] acquireMachinesLock for functional-908000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0807 10:47:44.820087    7534 start.go:364] duration metric: took 417.209µs to acquireMachinesLock for "functional-908000"
I0807 10:47:44.820215    7534 start.go:96] Skipping create...Using existing machine configuration
I0807 10:47:44.820230    7534 fix.go:54] fixHost starting: 
I0807 10:47:44.820947    7534 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
W0807 10:47:44.820965    7534 fix.go:138] unexpected machine state, will restart: <nil>
I0807 10:47:44.824451    7534 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
I0807 10:47:44.831316    7534 qemu.go:418] Using hvf for hardware acceleration
I0807 10:47:44.831651    7534 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:c0:a6:4e:a5:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/functional-908000/disk.qcow2
I0807 10:47:44.841102    7534 main.go:141] libmachine: STDOUT: 
I0807 10:47:44.841164    7534 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0807 10:47:44.841245    7534 fix.go:56] duration metric: took 21.017458ms for fixHost
I0807 10:47:44.841261    7534 start.go:83] releasing machines lock for "functional-908000", held for 21.157416ms
W0807 10:47:44.841503    7534 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0807 10:47:44.849403    7534 out.go:177] 
W0807 10:47:44.853337    7534 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0807 10:47:44.853369    7534 out.go:239] * 
W0807 10:47:44.855935    7534 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0807 10:47:44.863357    7534 out.go:177] 

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
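Both restart attempts above fail at the same step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon's unix socket, so the VM never boots and every later test that needs the cluster sees state=Stopped. A minimal Go sketch (hypothetical, not part of minikube or the test suite) that reproduces the failing connectivity check:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same unix socket that socket_vmnet_client uses. On this
	// agent the dial fails with "connection refused", meaning the
	// socket_vmnet daemon is not running or not listening at that path.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}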

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-908000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-908000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.150708ms)

** stderr ** 
	error: context "functional-908000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-908000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
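The manifest is never evaluated here: the cluster never started, so the kubeconfig has no "functional-908000" context and kubectl exits before reading testdata/invalidsvc.yaml. A minimal sketch of the same context lookup, assuming k8s.io/client-go is available:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the way kubectl does, then look up the context
	// the test passes via --context.
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		fmt.Println("cannot load kubeconfig:", err)
		return
	}
	if _, ok := cfg.Contexts["functional-908000"]; !ok {
		fmt.Println(`context "functional-908000" does not exist`) // matches the stderr above
	}
}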

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-908000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-908000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-908000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-908000 --alsologtostderr -v=1] stderr:
I0807 10:48:30.252299    7882 out.go:291] Setting OutFile to fd 1 ...
I0807 10:48:30.252720    7882 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 10:48:30.252729    7882 out.go:304] Setting ErrFile to fd 2...
I0807 10:48:30.252732    7882 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 10:48:30.252892    7882 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
I0807 10:48:30.253177    7882 mustload.go:65] Loading cluster: functional-908000
I0807 10:48:30.253369    7882 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0807 10:48:30.257816    7882 out.go:177] * The control-plane node functional-908000 host is not running: state=Stopped
I0807 10:48:30.261785    7882 out.go:177]   To start a cluster, run: "minikube start -p functional-908000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (41.81375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 status: exit status 7 (29.643917ms)

-- stdout --
	functional-908000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-908000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (29.2165ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-908000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 status -o json: exit status 7 (34.111458ms)

-- stdout --
	{"Name":"functional-908000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-908000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (28.621292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.12s)
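All three status invocations agree with the post-mortem: the host is stopped, so exit status 7 is the expected "stopped" code rather than a crash. The -o json output is small enough to decode directly; the struct below is inferred from the JSON shown above and is illustrative, not minikube's internal type:

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus mirrors the fields visible in the "minikube status -o json"
// output above (field set taken from this log, not from minikube's source).
type clusterStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"functional-908000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s apiserver=%s\n", st.Host, st.APIServer) // host=Stopped apiserver=Stopped
}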

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-908000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-908000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.424375ms)

** stderr ** 
	error: context "functional-908000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-908000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-908000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-908000 describe po hello-node-connect: exit status 1 (26.442ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
functional_test.go:1600: "kubectl --context functional-908000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-908000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-908000 logs -l app=hello-node-connect: exit status 1 (26.394ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
functional_test.go:1606: "kubectl --context functional-908000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-908000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-908000 describe svc hello-node-connect: exit status 1 (25.992333ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
functional_test.go:1612: "kubectl --context functional-908000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (29.985209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-908000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (30.433792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.11s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "echo hello": exit status 83 (41.90275ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n"*. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "cat /etc/hostname": exit status 83 (41.910667ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-908000"- but got *"* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n"*. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (29.407792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.11s)

TestFunctional/parallel/CpCmd (0.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (53.528959ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-908000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 "sudo cat /home/docker/cp-test.txt": exit status 83 (43.084792ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-908000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-908000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cp functional-908000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3721023642/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 cp functional-908000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3721023642/001/cp-test.txt: exit status 83 (48.81275ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-908000 cp functional-908000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3721023642/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 "sudo cat /home/docker/cp-test.txt": exit status 83 (45.908958ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3721023642/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (45.537291ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-908000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (44.962ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-908000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-908000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.28s)
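The "(-want +got)" blocks in this test read like github.com/google/go-cmp diffs: the test wanted the contents of testdata/cp-test.txt but instead got minikube's "host is not running" advice on stdout. A small sketch reproducing the comparison, with both strings taken from the log above:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := "Test file for checking file cp process"
	got := "* The control-plane node functional-908000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-908000\"\n"
	// cmp.Diff returns "" when equal; anything else is the -want +got report.
	if d := cmp.Diff(want, got); d != "" {
		fmt.Printf("content mismatch (-want +got):\n%s", d)
	}
}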

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7166/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/test/nested/copy/7166/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/test/nested/copy/7166/hosts": exit status 83 (39.193708ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/test/nested/copy/7166/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-908000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-908000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (28.911208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7166.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/7166.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/7166.pem": exit status 83 (40.464083ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/7166.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo cat /etc/ssl/certs/7166.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/7166.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-908000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-908000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7166.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /usr/share/ca-certificates/7166.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /usr/share/ca-certificates/7166.pem": exit status 83 (39.807792ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/7166.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo cat /usr/share/ca-certificates/7166.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/7166.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-908000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-908000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (44.673833ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-908000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-908000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/71662.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/71662.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/71662.pem": exit status 83 (50.621917ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/71662.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo cat /etc/ssl/certs/71662.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/71662.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-908000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-908000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/71662.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /usr/share/ca-certificates/71662.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /usr/share/ca-certificates/71662.pem": exit status 83 (43.539167ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/71662.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo cat /usr/share/ca-certificates/71662.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/71662.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-908000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-908000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (39.873958ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-908000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-908000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (30.389583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.29s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-908000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-908000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.174209ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-908000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (31.247458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo systemctl is-active crio": exit status 83 (37.502875ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 version -o=json --components: exit status 83 (40.898875ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-908000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-908000 image ls --format short --alsologtostderr:
I0807 10:48:30.652135    7897 out.go:291] Setting OutFile to fd 1 ...
I0807 10:48:30.652293    7897 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 10:48:30.652296    7897 out.go:304] Setting ErrFile to fd 2...
I0807 10:48:30.652298    7897 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 10:48:30.652438    7897 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
I0807 10:48:30.652867    7897 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0807 10:48:30.652935    7897 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-908000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-908000 image ls --format table --alsologtostderr:
I0807 10:48:30.871721    7909 out.go:291] Setting OutFile to fd 1 ...
I0807 10:48:30.871857    7909 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 10:48:30.871861    7909 out.go:304] Setting ErrFile to fd 2...
I0807 10:48:30.871863    7909 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 10:48:30.871977    7909 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
I0807 10:48:30.872386    7909 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0807 10:48:30.872450    7909 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-908000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-908000 image ls --format json --alsologtostderr:
I0807 10:48:30.836052    7907 out.go:291] Setting OutFile to fd 1 ...
I0807 10:48:30.836226    7907 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 10:48:30.836229    7907 out.go:304] Setting ErrFile to fd 2...
I0807 10:48:30.836232    7907 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 10:48:30.836358    7907 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
I0807 10:48:30.836831    7907 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0807 10:48:30.836894    7907 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-908000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-908000 image ls --format yaml --alsologtostderr:
I0807 10:48:30.686823    7899 out.go:291] Setting OutFile to fd 1 ...
I0807 10:48:30.687033    7899 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 10:48:30.687040    7899 out.go:304] Setting ErrFile to fd 2...
I0807 10:48:30.687047    7899 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 10:48:30.687162    7899 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
I0807 10:48:30.687557    7899 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0807 10:48:30.687620    7899 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh pgrep buildkitd: exit status 83 (41.810291ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image build -t localhost/my-image:functional-908000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-908000 image build -t localhost/my-image:functional-908000 testdata/build --alsologtostderr:
I0807 10:48:30.764469    7903 out.go:291] Setting OutFile to fd 1 ...
I0807 10:48:30.764956    7903 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 10:48:30.764959    7903 out.go:304] Setting ErrFile to fd 2...
I0807 10:48:30.764962    7903 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 10:48:30.765144    7903 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
I0807 10:48:30.765542    7903 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0807 10:48:30.765995    7903 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0807 10:48:30.766234    7903 build_images.go:133] succeeded building to: 
I0807 10:48:30.766237    7903 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls
functional_test.go:442: expected "localhost/my-image:functional-908000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-908000 docker-env) && out/minikube-darwin-arm64 status -p functional-908000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-908000 docker-env) && out/minikube-darwin-arm64 status -p functional-908000": exit status 1 (49.769417ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2: exit status 83 (41.689208ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
** stderr ** 
	I0807 10:48:30.526378    7891 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:48:30.526779    7891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:48:30.526783    7891 out.go:304] Setting ErrFile to fd 2...
	I0807 10:48:30.526786    7891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:48:30.526969    7891 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:48:30.527181    7891 mustload.go:65] Loading cluster: functional-908000
	I0807 10:48:30.527379    7891 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:48:30.531709    7891 out.go:177] * The control-plane node functional-908000 host is not running: state=Stopped
	I0807 10:48:30.535661    7891 out.go:177]   To start a cluster, run: "minikube start -p functional-908000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2: exit status 83 (40.684375ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
** stderr ** 
	I0807 10:48:30.610141    7895 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:48:30.610279    7895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:48:30.610283    7895 out.go:304] Setting ErrFile to fd 2...
	I0807 10:48:30.610286    7895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:48:30.610405    7895 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:48:30.610639    7895 mustload.go:65] Loading cluster: functional-908000
	I0807 10:48:30.610848    7895 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:48:30.614776    7895 out.go:177] * The control-plane node functional-908000 host is not running: state=Stopped
	I0807 10:48:30.618616    7895 out.go:177]   To start a cluster, run: "minikube start -p functional-908000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2: exit status 83 (41.694083ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
** stderr ** 
	I0807 10:48:30.568941    7893 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:48:30.569088    7893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:48:30.569091    7893 out.go:304] Setting ErrFile to fd 2...
	I0807 10:48:30.569093    7893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:48:30.569215    7893 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:48:30.569439    7893 mustload.go:65] Loading cluster: functional-908000
	I0807 10:48:30.569653    7893 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:48:30.574632    7893 out.go:177] * The control-plane node functional-908000 host is not running: state=Stopped
	I0807 10:48:30.578646    7893 out.go:177]   To start a cluster, run: "minikube start -p functional-908000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-908000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-908000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (25.932083ms)

** stderr ** 
	error: context "functional-908000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-908000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 service list: exit status 83 (43.6715ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-908000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 service list -o json: exit status 83 (41.655458ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-908000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 service --namespace=default --https --url hello-node: exit status 83 (42.851542ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-908000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 service hello-node --url --format={{.IP}}: exit status 83 (42.813291ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-908000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 service hello-node --url: exit status 83 (41.701667ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-908000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:1565: failed to parse "* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"": parse "* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0807 10:47:46.734801    7658 out.go:291] Setting OutFile to fd 1 ...
I0807 10:47:46.734954    7658 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 10:47:46.734958    7658 out.go:304] Setting ErrFile to fd 2...
I0807 10:47:46.734960    7658 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 10:47:46.735087    7658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
I0807 10:47:46.735322    7658 mustload.go:65] Loading cluster: functional-908000
I0807 10:47:46.735517    7658 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0807 10:47:46.740050    7658 out.go:177] * The control-plane node functional-908000 host is not running: state=Stopped
I0807 10:47:46.751047    7658 out.go:177]   To start a cluster, run: "minikube start -p functional-908000"

stdout: * The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7657: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-908000": client config: context "functional-908000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (93.78s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-908000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-908000 get svc nginx-svc: exit status 1 (72.435625ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-908000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (93.78s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image load --daemon docker.io/kicbase/echo-server:functional-908000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-908000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.31s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image load --daemon docker.io/kicbase/echo-server:functional-908000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-908000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-908000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image load --daemon docker.io/kicbase/echo-server:functional-908000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-908000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.20s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image save docker.io/kicbase/echo-server:functional-908000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-908000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.034680084s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 16 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (38.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (38.44s)

TestMultiControlPlane/serial/StartCluster (10.08s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-084000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-084000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (10.026351625s)

-- stdout --
	* [ha-084000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-084000" primary control-plane node in "ha-084000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-084000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 10:50:24.515088    7988 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:50:24.515204    7988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:50:24.515207    7988 out.go:304] Setting ErrFile to fd 2...
	I0807 10:50:24.515210    7988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:50:24.515343    7988 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:50:24.516450    7988 out.go:298] Setting JSON to false
	I0807 10:50:24.532737    7988 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4793,"bootTime":1723048231,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 10:50:24.532822    7988 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 10:50:24.539267    7988 out.go:177] * [ha-084000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 10:50:24.547354    7988 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 10:50:24.547416    7988 notify.go:220] Checking for updates...
	I0807 10:50:24.554342    7988 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 10:50:24.557328    7988 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 10:50:24.560350    7988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 10:50:24.563364    7988 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 10:50:24.566395    7988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 10:50:24.569523    7988 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 10:50:24.574282    7988 out.go:177] * Using the qemu2 driver based on user configuration
	I0807 10:50:24.581257    7988 start.go:297] selected driver: qemu2
	I0807 10:50:24.581264    7988 start.go:901] validating driver "qemu2" against <nil>
	I0807 10:50:24.581273    7988 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 10:50:24.583518    7988 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 10:50:24.586284    7988 out.go:177] * Automatically selected the socket_vmnet network
	I0807 10:50:24.589440    7988 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 10:50:24.589469    7988 cni.go:84] Creating CNI manager for ""
	I0807 10:50:24.589476    7988 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0807 10:50:24.589481    7988 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0807 10:50:24.589528    7988 start.go:340] cluster config:
	{Name:ha-084000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-084000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:50:24.593336    7988 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:50:24.601297    7988 out.go:177] * Starting "ha-084000" primary control-plane node in "ha-084000" cluster
	I0807 10:50:24.605171    7988 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 10:50:24.605189    7988 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 10:50:24.605200    7988 cache.go:56] Caching tarball of preloaded images
	I0807 10:50:24.605278    7988 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 10:50:24.605284    7988 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 10:50:24.605487    7988 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/ha-084000/config.json ...
	I0807 10:50:24.605498    7988 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/ha-084000/config.json: {Name:mkba6157aae57cc8e0fbd90cc0ebd123a42ce209 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 10:50:24.605857    7988 start.go:360] acquireMachinesLock for ha-084000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:50:24.605890    7988 start.go:364] duration metric: took 28.25µs to acquireMachinesLock for "ha-084000"
	I0807 10:50:24.605902    7988 start.go:93] Provisioning new machine with config: &{Name:ha-084000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-084000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 10:50:24.605934    7988 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 10:50:24.614309    7988 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 10:50:24.631978    7988 start.go:159] libmachine.API.Create for "ha-084000" (driver="qemu2")
	I0807 10:50:24.632007    7988 client.go:168] LocalClient.Create starting
	I0807 10:50:24.632071    7988 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 10:50:24.632102    7988 main.go:141] libmachine: Decoding PEM data...
	I0807 10:50:24.632112    7988 main.go:141] libmachine: Parsing certificate...
	I0807 10:50:24.632153    7988 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 10:50:24.632177    7988 main.go:141] libmachine: Decoding PEM data...
	I0807 10:50:24.632184    7988 main.go:141] libmachine: Parsing certificate...
	I0807 10:50:24.632556    7988 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 10:50:24.789863    7988 main.go:141] libmachine: Creating SSH key...
	I0807 10:50:25.037329    7988 main.go:141] libmachine: Creating Disk image...
	I0807 10:50:25.037337    7988 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 10:50:25.037599    7988 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/disk.qcow2
	I0807 10:50:25.047328    7988 main.go:141] libmachine: STDOUT: 
	I0807 10:50:25.047350    7988 main.go:141] libmachine: STDERR: 
	I0807 10:50:25.047408    7988 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/disk.qcow2 +20000M
	I0807 10:50:25.055287    7988 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 10:50:25.055302    7988 main.go:141] libmachine: STDERR: 
	I0807 10:50:25.055318    7988 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/disk.qcow2
	I0807 10:50:25.055321    7988 main.go:141] libmachine: Starting QEMU VM...
	I0807 10:50:25.055331    7988 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:50:25.055372    7988 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:b8:fb:b1:0c:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/disk.qcow2
	I0807 10:50:25.056964    7988 main.go:141] libmachine: STDOUT: 
	I0807 10:50:25.056981    7988 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:50:25.056997    7988 client.go:171] duration metric: took 424.987625ms to LocalClient.Create
	I0807 10:50:27.059234    7988 start.go:128] duration metric: took 2.453284s to createHost
	I0807 10:50:27.059335    7988 start.go:83] releasing machines lock for "ha-084000", held for 2.453451334s
	W0807 10:50:27.059432    7988 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:50:27.071850    7988 out.go:177] * Deleting "ha-084000" in qemu2 ...
	W0807 10:50:27.101651    7988 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:50:27.101682    7988 start.go:729] Will try again in 5 seconds ...
	I0807 10:50:32.103806    7988 start.go:360] acquireMachinesLock for ha-084000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:50:32.104302    7988 start.go:364] duration metric: took 388.666µs to acquireMachinesLock for "ha-084000"
	I0807 10:50:32.104438    7988 start.go:93] Provisioning new machine with config: &{Name:ha-084000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-084000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 10:50:32.104667    7988 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 10:50:32.116354    7988 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 10:50:32.164484    7988 start.go:159] libmachine.API.Create for "ha-084000" (driver="qemu2")
	I0807 10:50:32.164540    7988 client.go:168] LocalClient.Create starting
	I0807 10:50:32.164643    7988 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 10:50:32.164703    7988 main.go:141] libmachine: Decoding PEM data...
	I0807 10:50:32.164717    7988 main.go:141] libmachine: Parsing certificate...
	I0807 10:50:32.164776    7988 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 10:50:32.164830    7988 main.go:141] libmachine: Decoding PEM data...
	I0807 10:50:32.164844    7988 main.go:141] libmachine: Parsing certificate...
	I0807 10:50:32.165459    7988 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 10:50:32.333017    7988 main.go:141] libmachine: Creating SSH key...
	I0807 10:50:32.444003    7988 main.go:141] libmachine: Creating Disk image...
	I0807 10:50:32.444009    7988 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 10:50:32.444203    7988 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/disk.qcow2
	I0807 10:50:32.453542    7988 main.go:141] libmachine: STDOUT: 
	I0807 10:50:32.453558    7988 main.go:141] libmachine: STDERR: 
	I0807 10:50:32.453602    7988 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/disk.qcow2 +20000M
	I0807 10:50:32.461380    7988 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 10:50:32.461393    7988 main.go:141] libmachine: STDERR: 
	I0807 10:50:32.461404    7988 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/disk.qcow2
	I0807 10:50:32.461409    7988 main.go:141] libmachine: Starting QEMU VM...
	I0807 10:50:32.461417    7988 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:50:32.461443    7988 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:5e:ac:15:44:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/disk.qcow2
	I0807 10:50:32.462991    7988 main.go:141] libmachine: STDOUT: 
	I0807 10:50:32.463006    7988 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:50:32.463016    7988 client.go:171] duration metric: took 298.472375ms to LocalClient.Create
	I0807 10:50:34.465184    7988 start.go:128] duration metric: took 2.360483375s to createHost
	I0807 10:50:34.465272    7988 start.go:83] releasing machines lock for "ha-084000", held for 2.360946375s
	W0807 10:50:34.466005    7988 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-084000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-084000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:50:34.475568    7988 out.go:177] 
	W0807 10:50:34.487971    7988 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 10:50:34.488024    7988 out.go:239] * 
	* 
	W0807 10:50:34.490740    7988 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 10:50:34.501224    7988 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-084000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000: exit status 7 (53.764458ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.08s)
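Note: both VM creation attempts above die on the same line: the qemu launch is proxied through socket_vmnet_client, and /var/run/socket_vmnet refuses connections, which means the socket_vmnet daemon is not running on the agent. A triage sketch, using the install paths shown in the log; the gateway address is the conventional default from the socket_vmnet README, not something this log confirms:

    # The unix socket only exists while the daemon is alive.
    ls -l /var/run/socket_vmnet

    # If it is missing, start the daemon manually (gateway IP assumed).
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet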

TestMultiControlPlane/serial/DeployApp (72.17s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-084000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-084000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (58.671083ms)

** stderr ** 
	error: cluster "ha-084000" does not exist
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-084000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-084000 -- rollout status deployment/busybox: exit status 1 (56.998125ms)

** stderr ** 
	error: no server found for cluster "ha-084000"
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.964333ms)

** stderr ** 
	error: no server found for cluster "ha-084000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.554625ms)

** stderr ** 
	error: no server found for cluster "ha-084000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.207917ms)

** stderr ** 
	error: no server found for cluster "ha-084000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.168ms)

** stderr ** 
	error: no server found for cluster "ha-084000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.687459ms)

** stderr ** 
	error: no server found for cluster "ha-084000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.416292ms)

** stderr ** 
	error: no server found for cluster "ha-084000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.042542ms)

** stderr ** 
	error: no server found for cluster "ha-084000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.551834ms)

** stderr ** 
	error: no server found for cluster "ha-084000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.156291ms)

** stderr ** 
	error: no server found for cluster "ha-084000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.873042ms)

** stderr ** 
	error: no server found for cluster "ha-084000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.268708ms)

** stderr ** 
	error: no server found for cluster "ha-084000"
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-084000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-084000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.620125ms)

** stderr ** 
	error: no server found for cluster "ha-084000"
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-084000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-084000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.593ms)

** stderr ** 
	error: no server found for cluster "ha-084000"
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-084000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-084000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.6325ms)

** stderr ** 
	error: no server found for cluster "ha-084000"
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000: exit status 7 (29.051542ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (72.17s)
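Note: every retry above fails identically because StartCluster never produced an API server, so this is a cascade failure rather than a deployment or DNS problem. A quick way to confirm that before reading further, assuming the same profile name:

    # With no kubeconfig entry and no running profile, every kubectl call
    # against ha-084000 fails with "no server found".
    kubectl config get-contexts
    out/minikube-darwin-arm64 profile list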

TestMultiControlPlane/serial/PingHostFromPods (0.08s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-084000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.553042ms)

** stderr ** 
	error: no server found for cluster "ha-084000"
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000: exit status 7 (29.182667ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.08s)

TestMultiControlPlane/serial/AddWorkerNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-084000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-084000 -v=7 --alsologtostderr: exit status 83 (41.849292ms)

-- stdout --
	* The control-plane node ha-084000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-084000"
-- /stdout --
** stderr ** 
	I0807 10:51:46.853444    8099 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:51:46.854225    8099 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:51:46.854228    8099 out.go:304] Setting ErrFile to fd 2...
	I0807 10:51:46.854231    8099 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:51:46.854420    8099 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:51:46.854644    8099 mustload.go:65] Loading cluster: ha-084000
	I0807 10:51:46.854825    8099 config.go:182] Loaded profile config "ha-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:51:46.859319    8099 out.go:177] * The control-plane node ha-084000 host is not running: state=Stopped
	I0807 10:51:46.863285    8099 out.go:177]   To start a cluster, run: "minikube start -p ha-084000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-084000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000: exit status 7 (29.122416ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-084000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-084000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (25.726791ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-084000
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-084000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-084000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000: exit status 7 (30.068083ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
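Note: here kubectl rejects the context before any jsonpath evaluation happens, which is why the test also reports "unexpected end of JSON input" (an empty response). If the cluster were actually running, regenerating the kubeconfig entry would be the usual fix; a sketch, with the same profile name assumed:

    # Rewrite the kubeconfig entry for the profile, then retry the label query.
    out/minikube-darwin-arm64 update-context -p ha-084000
    kubectl --context ha-084000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"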

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-084000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-084000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-084000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-084000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-084000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-084000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-084000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-084000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000: exit status 7 (29.856166ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)
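Note: the assertion decodes 'profile list --output json' and counts Config.Nodes; because the start failed, the profile persisted with a single stopped node instead of the expected four. A one-liner to inspect the same fields the test checks (jq is an assumption here, it is not part of the harness):

    # Mirrors the ha_test.go assertions against the JSON shown above.
    out/minikube-darwin-arm64 profile list --output json | jq '.valid[] | {name: .Name, status: .Status, nodes: (.Config.Nodes | length)}'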

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-084000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-084000 status --output json -v=7 --alsologtostderr: exit status 7 (29.248375ms)

-- stdout --
	{"Name":"ha-084000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}
-- /stdout --
** stderr ** 
	I0807 10:51:47.056127    8111 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:51:47.056269    8111 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:51:47.056272    8111 out.go:304] Setting ErrFile to fd 2...
	I0807 10:51:47.056275    8111 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:51:47.056403    8111 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:51:47.056524    8111 out.go:298] Setting JSON to true
	I0807 10:51:47.056533    8111 mustload.go:65] Loading cluster: ha-084000
	I0807 10:51:47.056597    8111 notify.go:220] Checking for updates...
	I0807 10:51:47.056711    8111 config.go:182] Loaded profile config "ha-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:51:47.056717    8111 status.go:255] checking status of ha-084000 ...
	I0807 10:51:47.056954    8111 status.go:330] ha-084000 host status = "Stopped" (err=<nil>)
	I0807 10:51:47.056958    8111 status.go:343] host is not running, skipping remaining checks
	I0807 10:51:47.056960    8111 status.go:257] ha-084000 status: &{Name:ha-084000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-084000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000: exit status 7 (29.384292ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
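Note: the unmarshal error is itself diagnostic: with exactly one node, 'status --output json' prints a single JSON object, while the test decodes into []cmd.Status and therefore expects an array; the decode fails before any file copying is attempted. A quick way to see the shape mismatch (jq assumed again):

    # Prints "object" for this single-node profile; the test needs "array".
    out/minikube-darwin-arm64 -p ha-084000 status --output json | jq 'type'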

TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-084000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-084000 node stop m02 -v=7 --alsologtostderr: exit status 85 (46.430209ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0807 10:51:47.115288    8115 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:51:47.115777    8115 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:51:47.115780    8115 out.go:304] Setting ErrFile to fd 2...
	I0807 10:51:47.115782    8115 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:51:47.115918    8115 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:51:47.116164    8115 mustload.go:65] Loading cluster: ha-084000
	I0807 10:51:47.116352    8115 config.go:182] Loaded profile config "ha-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:51:47.120334    8115 out.go:177] 
	W0807 10:51:47.123369    8115 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0807 10:51:47.123374    8115 out.go:239] * 
	* 
	W0807 10:51:47.125341    8115 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 10:51:47.129311    8115 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-084000 node stop m02 -v=7 --alsologtostderr": exit status 85
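Note: GUEST_NODE_RETRIEVE / exit status 85 just means no m02 node exists in the profile, which is consistent with the single-node config recorded after the failed start. A sketch to enumerate the nodes the profile actually has:

    # Lists the nodes recorded for the profile; here only ha-084000 exists,
    # so "node stop m02" cannot find its target.
    out/minikube-darwin-arm64 node list -p ha-084000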
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr: exit status 7 (29.358833ms)

-- stdout --
	ha-084000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0807 10:51:47.162023    8117 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:51:47.162193    8117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:51:47.162197    8117 out.go:304] Setting ErrFile to fd 2...
	I0807 10:51:47.162199    8117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:51:47.162326    8117 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:51:47.162451    8117 out.go:298] Setting JSON to false
	I0807 10:51:47.162461    8117 mustload.go:65] Loading cluster: ha-084000
	I0807 10:51:47.162521    8117 notify.go:220] Checking for updates...
	I0807 10:51:47.162654    8117 config.go:182] Loaded profile config "ha-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:51:47.162660    8117 status.go:255] checking status of ha-084000 ...
	I0807 10:51:47.162889    8117 status.go:330] ha-084000 host status = "Stopped" (err=<nil>)
	I0807 10:51:47.162893    8117 status.go:343] host is not running, skipping remaining checks
	I0807 10:51:47.162895    8117 status.go:257] ha-084000 status: &{Name:ha-084000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr": ha-084000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr": ha-084000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr": ha-084000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr": ha-084000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000: exit status 7 (29.860083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)
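Editor's note: the stop exits with status 85 before touching any VM. GUEST_NODE_RETRIEVE fires during node lookup because the profile simply has no node named m02; the multi-node cluster never came up earlier in this run. A minimal way to confirm the node inventory first (a sketch only; jq being installed on the agent is an assumption):

    # list the nodes minikube thinks the profile has
    out/minikube-darwin-arm64 node list -p ha-084000
    # same information straight from the profile config (assumes jq)
    out/minikube-darwin-arm64 profile list --output json | jq '.valid[].Config.Nodes'

In this run both should show only the single primary control-plane node, matching the error above.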

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.07s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-084000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-084000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-084000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-084000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000: exit status 7 (28.9825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.07s)
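Editor's note: ha_test.go:413 asserts on the computed Status field inside the escaped JSON above, which is hard to read as printed. A hedged sketch for pulling out just the fields the assertion checks (assumes jq is installed):

    out/minikube-darwin-arm64 profile list --output json \
      | jq '.valid[] | {name: .Name, status: .Status, nodes: (.Config.Nodes | length)}'

For this run that would report status "Stopped" with 1 node: the profile is not merely "Degraded", it never gained the extra control-plane nodes in the first place.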

TestMultiControlPlane/serial/RestartSecondaryNode (54.7s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-084000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-084000 node start m02 -v=7 --alsologtostderr: exit status 85 (48.530625ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0807 10:51:47.296943    8126 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:51:47.297343    8126 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:51:47.297346    8126 out.go:304] Setting ErrFile to fd 2...
	I0807 10:51:47.297349    8126 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:51:47.297547    8126 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:51:47.297768    8126 mustload.go:65] Loading cluster: ha-084000
	I0807 10:51:47.297944    8126 config.go:182] Loaded profile config "ha-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:51:47.302432    8126 out.go:177] 
	W0807 10:51:47.306270    8126 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0807 10:51:47.306274    8126 out.go:239] * 
	* 
	W0807 10:51:47.308261    8126 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 10:51:47.312281    8126 out.go:177] 

** /stderr **
ha_test.go:422: I0807 10:51:47.296943    8126 out.go:291] Setting OutFile to fd 1 ...
I0807 10:51:47.297343    8126 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 10:51:47.297346    8126 out.go:304] Setting ErrFile to fd 2...
I0807 10:51:47.297349    8126 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 10:51:47.297547    8126 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
I0807 10:51:47.297768    8126 mustload.go:65] Loading cluster: ha-084000
I0807 10:51:47.297944    8126 config.go:182] Loaded profile config "ha-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0807 10:51:47.302432    8126 out.go:177] 
W0807 10:51:47.306270    8126 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0807 10:51:47.306274    8126 out.go:239] * 
* 
W0807 10:51:47.308261    8126 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0807 10:51:47.312281    8126 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-084000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr: exit status 7 (30.151791ms)

-- stdout --
	ha-084000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0807 10:51:47.345688    8128 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:51:47.345832    8128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:51:47.345835    8128 out.go:304] Setting ErrFile to fd 2...
	I0807 10:51:47.345837    8128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:51:47.345969    8128 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:51:47.346087    8128 out.go:298] Setting JSON to false
	I0807 10:51:47.346096    8128 mustload.go:65] Loading cluster: ha-084000
	I0807 10:51:47.346156    8128 notify.go:220] Checking for updates...
	I0807 10:51:47.346293    8128 config.go:182] Loaded profile config "ha-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:51:47.346299    8128 status.go:255] checking status of ha-084000 ...
	I0807 10:51:47.346494    8128 status.go:330] ha-084000 host status = "Stopped" (err=<nil>)
	I0807 10:51:47.346498    8128 status.go:343] host is not running, skipping remaining checks
	I0807 10:51:47.346500    8128 status.go:257] ha-084000 status: &{Name:ha-084000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr: exit status 7 (73.679542ms)

-- stdout --
	ha-084000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0807 10:51:48.470165    8130 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:51:48.470342    8130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:51:48.470347    8130 out.go:304] Setting ErrFile to fd 2...
	I0807 10:51:48.470355    8130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:51:48.470523    8130 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:51:48.470692    8130 out.go:298] Setting JSON to false
	I0807 10:51:48.470705    8130 mustload.go:65] Loading cluster: ha-084000
	I0807 10:51:48.470743    8130 notify.go:220] Checking for updates...
	I0807 10:51:48.470979    8130 config.go:182] Loaded profile config "ha-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:51:48.470987    8130 status.go:255] checking status of ha-084000 ...
	I0807 10:51:48.471264    8130 status.go:330] ha-084000 host status = "Stopped" (err=<nil>)
	I0807 10:51:48.471269    8130 status.go:343] host is not running, skipping remaining checks
	I0807 10:51:48.471272    8130 status.go:257] ha-084000 status: &{Name:ha-084000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr: exit status 7 (74.765ms)

-- stdout --
	ha-084000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0807 10:51:50.155633    8132 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:51:50.155890    8132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:51:50.155897    8132 out.go:304] Setting ErrFile to fd 2...
	I0807 10:51:50.155901    8132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:51:50.156065    8132 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:51:50.156215    8132 out.go:298] Setting JSON to false
	I0807 10:51:50.156228    8132 mustload.go:65] Loading cluster: ha-084000
	I0807 10:51:50.156266    8132 notify.go:220] Checking for updates...
	I0807 10:51:50.156497    8132 config.go:182] Loaded profile config "ha-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:51:50.156505    8132 status.go:255] checking status of ha-084000 ...
	I0807 10:51:50.156776    8132 status.go:330] ha-084000 host status = "Stopped" (err=<nil>)
	I0807 10:51:50.156781    8132 status.go:343] host is not running, skipping remaining checks
	I0807 10:51:50.156784    8132 status.go:257] ha-084000 status: &{Name:ha-084000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr: exit status 7 (72.676792ms)

-- stdout --
	ha-084000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0807 10:51:53.381995    8136 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:51:53.382171    8136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:51:53.382176    8136 out.go:304] Setting ErrFile to fd 2...
	I0807 10:51:53.382180    8136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:51:53.382359    8136 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:51:53.382501    8136 out.go:298] Setting JSON to false
	I0807 10:51:53.382514    8136 mustload.go:65] Loading cluster: ha-084000
	I0807 10:51:53.382544    8136 notify.go:220] Checking for updates...
	I0807 10:51:53.382806    8136 config.go:182] Loaded profile config "ha-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:51:53.382816    8136 status.go:255] checking status of ha-084000 ...
	I0807 10:51:53.383090    8136 status.go:330] ha-084000 host status = "Stopped" (err=<nil>)
	I0807 10:51:53.383096    8136 status.go:343] host is not running, skipping remaining checks
	I0807 10:51:53.383099    8136 status.go:257] ha-084000 status: &{Name:ha-084000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr: exit status 7 (75.998583ms)

-- stdout --
	ha-084000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0807 10:51:58.128154    8142 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:51:58.128314    8142 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:51:58.128318    8142 out.go:304] Setting ErrFile to fd 2...
	I0807 10:51:58.128322    8142 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:51:58.128481    8142 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:51:58.128634    8142 out.go:298] Setting JSON to false
	I0807 10:51:58.128647    8142 mustload.go:65] Loading cluster: ha-084000
	I0807 10:51:58.128686    8142 notify.go:220] Checking for updates...
	I0807 10:51:58.128938    8142 config.go:182] Loaded profile config "ha-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:51:58.128946    8142 status.go:255] checking status of ha-084000 ...
	I0807 10:51:58.129208    8142 status.go:330] ha-084000 host status = "Stopped" (err=<nil>)
	I0807 10:51:58.129215    8142 status.go:343] host is not running, skipping remaining checks
	I0807 10:51:58.129218    8142 status.go:257] ha-084000 status: &{Name:ha-084000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr: exit status 7 (72.015833ms)

-- stdout --
	ha-084000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0807 10:52:00.923284    8144 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:52:00.923465    8144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:52:00.923470    8144 out.go:304] Setting ErrFile to fd 2...
	I0807 10:52:00.923473    8144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:52:00.923673    8144 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:52:00.923845    8144 out.go:298] Setting JSON to false
	I0807 10:52:00.923859    8144 mustload.go:65] Loading cluster: ha-084000
	I0807 10:52:00.923903    8144 notify.go:220] Checking for updates...
	I0807 10:52:00.924143    8144 config.go:182] Loaded profile config "ha-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:52:00.924151    8144 status.go:255] checking status of ha-084000 ...
	I0807 10:52:00.924458    8144 status.go:330] ha-084000 host status = "Stopped" (err=<nil>)
	I0807 10:52:00.924463    8144 status.go:343] host is not running, skipping remaining checks
	I0807 10:52:00.924466    8144 status.go:257] ha-084000 status: &{Name:ha-084000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr: exit status 7 (72.749666ms)

-- stdout --
	ha-084000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0807 10:52:08.330114    8150 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:52:08.330332    8150 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:52:08.330337    8150 out.go:304] Setting ErrFile to fd 2...
	I0807 10:52:08.330340    8150 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:52:08.330509    8150 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:52:08.330675    8150 out.go:298] Setting JSON to false
	I0807 10:52:08.330688    8150 mustload.go:65] Loading cluster: ha-084000
	I0807 10:52:08.330741    8150 notify.go:220] Checking for updates...
	I0807 10:52:08.330971    8150 config.go:182] Loaded profile config "ha-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:52:08.330979    8150 status.go:255] checking status of ha-084000 ...
	I0807 10:52:08.331264    8150 status.go:330] ha-084000 host status = "Stopped" (err=<nil>)
	I0807 10:52:08.331269    8150 status.go:343] host is not running, skipping remaining checks
	I0807 10:52:08.331273    8150 status.go:257] ha-084000 status: &{Name:ha-084000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr: exit status 7 (71.458958ms)

-- stdout --
	ha-084000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0807 10:52:18.174844    8161 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:52:18.175039    8161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:52:18.175044    8161 out.go:304] Setting ErrFile to fd 2...
	I0807 10:52:18.175047    8161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:52:18.175213    8161 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:52:18.175384    8161 out.go:298] Setting JSON to false
	I0807 10:52:18.175397    8161 mustload.go:65] Loading cluster: ha-084000
	I0807 10:52:18.175449    8161 notify.go:220] Checking for updates...
	I0807 10:52:18.175669    8161 config.go:182] Loaded profile config "ha-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:52:18.175677    8161 status.go:255] checking status of ha-084000 ...
	I0807 10:52:18.175963    8161 status.go:330] ha-084000 host status = "Stopped" (err=<nil>)
	I0807 10:52:18.175968    8161 status.go:343] host is not running, skipping remaining checks
	I0807 10:52:18.175971    8161 status.go:257] ha-084000 status: &{Name:ha-084000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr: exit status 7 (74.455417ms)

-- stdout --
	ha-084000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0807 10:52:41.931734    8173 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:52:41.931963    8173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:52:41.931968    8173 out.go:304] Setting ErrFile to fd 2...
	I0807 10:52:41.931971    8173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:52:41.932172    8173 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:52:41.932359    8173 out.go:298] Setting JSON to false
	I0807 10:52:41.932373    8173 mustload.go:65] Loading cluster: ha-084000
	I0807 10:52:41.932423    8173 notify.go:220] Checking for updates...
	I0807 10:52:41.932662    8173 config.go:182] Loaded profile config "ha-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:52:41.932672    8173 status.go:255] checking status of ha-084000 ...
	I0807 10:52:41.932985    8173 status.go:330] ha-084000 host status = "Stopped" (err=<nil>)
	I0807 10:52:41.932990    8173 status.go:343] host is not running, skipping remaining checks
	I0807 10:52:41.932993    8173 status.go:257] ha-084000 status: &{Name:ha-084000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000: exit status 7 (34.112709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (54.70s)
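Editor's note: nearly all of the 54.7s is the retry loop around ha_test.go:428. Status is polled with growing gaps (timestamps run 10:51:47 through 10:52:41) and every attempt exits 7, which reads as minikube's status bitmask with the host, cluster, and Kubernetes checks all flagged down (an interpretation of minikube's status command, not something the log itself states). A rough shell equivalent of that poll, with purely illustrative delays:

    # retry status until it succeeds or the attempts run out
    for delay in 1 2 3 5 8 13 21; do
        out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr && break
        sleep "$delay"
    done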

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-084000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-084000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-084000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-084000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-084000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-084000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-084000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-084000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000: exit status 7 (29.865375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.21s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-084000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-084000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-084000 -v=7 --alsologtostderr: (1.847925334s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-084000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-084000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.223176291s)

-- stdout --
	* [ha-084000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-084000" primary control-plane node in "ha-084000" cluster
	* Restarting existing qemu2 VM for "ha-084000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-084000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 10:52:43.992425    8196 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:52:43.992598    8196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:52:43.992603    8196 out.go:304] Setting ErrFile to fd 2...
	I0807 10:52:43.992607    8196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:52:43.992792    8196 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:52:43.994183    8196 out.go:298] Setting JSON to false
	I0807 10:52:44.014229    8196 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4933,"bootTime":1723048231,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 10:52:44.014300    8196 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 10:52:44.019212    8196 out.go:177] * [ha-084000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 10:52:44.025156    8196 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 10:52:44.025185    8196 notify.go:220] Checking for updates...
	I0807 10:52:44.032069    8196 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 10:52:44.035132    8196 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 10:52:44.038209    8196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 10:52:44.043897    8196 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 10:52:44.047146    8196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 10:52:44.050373    8196 config.go:182] Loaded profile config "ha-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:52:44.050428    8196 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 10:52:44.054085    8196 out.go:177] * Using the qemu2 driver based on existing profile
	I0807 10:52:44.061140    8196 start.go:297] selected driver: qemu2
	I0807 10:52:44.061146    8196 start.go:901] validating driver "qemu2" against &{Name:ha-084000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-084000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:52:44.061197    8196 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 10:52:44.063550    8196 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 10:52:44.063574    8196 cni.go:84] Creating CNI manager for ""
	I0807 10:52:44.063581    8196 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0807 10:52:44.063640    8196 start.go:340] cluster config:
	{Name:ha-084000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-084000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:52:44.067331    8196 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:52:44.075115    8196 out.go:177] * Starting "ha-084000" primary control-plane node in "ha-084000" cluster
	I0807 10:52:44.078986    8196 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 10:52:44.079004    8196 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 10:52:44.079038    8196 cache.go:56] Caching tarball of preloaded images
	I0807 10:52:44.079108    8196 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 10:52:44.079113    8196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 10:52:44.079165    8196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/ha-084000/config.json ...
	I0807 10:52:44.079630    8196 start.go:360] acquireMachinesLock for ha-084000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:52:44.079666    8196 start.go:364] duration metric: took 29.583µs to acquireMachinesLock for "ha-084000"
	I0807 10:52:44.079675    8196 start.go:96] Skipping create...Using existing machine configuration
	I0807 10:52:44.079682    8196 fix.go:54] fixHost starting: 
	I0807 10:52:44.079807    8196 fix.go:112] recreateIfNeeded on ha-084000: state=Stopped err=<nil>
	W0807 10:52:44.079816    8196 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 10:52:44.088110    8196 out.go:177] * Restarting existing qemu2 VM for "ha-084000" ...
	I0807 10:52:44.092128    8196 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:52:44.092164    8196 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:5e:ac:15:44:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/disk.qcow2
	I0807 10:52:44.094253    8196 main.go:141] libmachine: STDOUT: 
	I0807 10:52:44.094271    8196 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:52:44.094298    8196 fix.go:56] duration metric: took 14.616625ms for fixHost
	I0807 10:52:44.094304    8196 start.go:83] releasing machines lock for "ha-084000", held for 14.633542ms
	W0807 10:52:44.094310    8196 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 10:52:44.094345    8196 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:52:44.094350    8196 start.go:729] Will try again in 5 seconds ...
	I0807 10:52:49.096538    8196 start.go:360] acquireMachinesLock for ha-084000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:52:49.096868    8196 start.go:364] duration metric: took 245.708µs to acquireMachinesLock for "ha-084000"
	I0807 10:52:49.097006    8196 start.go:96] Skipping create...Using existing machine configuration
	I0807 10:52:49.097022    8196 fix.go:54] fixHost starting: 
	I0807 10:52:49.097665    8196 fix.go:112] recreateIfNeeded on ha-084000: state=Stopped err=<nil>
	W0807 10:52:49.097688    8196 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 10:52:49.102192    8196 out.go:177] * Restarting existing qemu2 VM for "ha-084000" ...
	I0807 10:52:49.105978    8196 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:52:49.106215    8196 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:5e:ac:15:44:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/disk.qcow2
	I0807 10:52:49.114868    8196 main.go:141] libmachine: STDOUT: 
	I0807 10:52:49.114928    8196 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:52:49.114994    8196 fix.go:56] duration metric: took 17.970875ms for fixHost
	I0807 10:52:49.115016    8196 start.go:83] releasing machines lock for "ha-084000", held for 18.122334ms
	W0807 10:52:49.115177    8196 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-084000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-084000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:52:49.121006    8196 out.go:177] 
	W0807 10:52:49.125065    8196 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 10:52:49.125089    8196 out.go:239] * 
	* 
	W0807 10:52:49.127524    8196 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 10:52:49.135048    8196 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-084000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-084000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000: exit status 7 (32.669833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.21s)
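Every subtest in this block fails the same way: the qemu2 driver cannot reach "/var/run/socket_vmnet" ("Connection refused"), so the VM never boots and each later step inherits a Stopped host. A minimal triage sketch for the CI host, assuming socket_vmnet was installed at the paths shown in the log (all commands below are standard macOS tools):

    # Is a socket_vmnet daemon process running at all?
    pgrep -fl socket_vmnet

    # Does the socket file exist, and with what ownership/permissions?
    ls -l /var/run/socket_vmnet

    # If the daemon is managed by launchd, check that its job is loaded:
    sudo launchctl list | grep -i vmnet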

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-084000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-084000 node delete m03 -v=7 --alsologtostderr: exit status 83 (38.733375ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-084000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-084000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 10:52:49.277383    8208 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:52:49.277802    8208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:52:49.277811    8208 out.go:304] Setting ErrFile to fd 2...
	I0807 10:52:49.277814    8208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:52:49.277973    8208 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:52:49.278188    8208 mustload.go:65] Loading cluster: ha-084000
	I0807 10:52:49.278384    8208 config.go:182] Loaded profile config "ha-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:52:49.281868    8208 out.go:177] * The control-plane node ha-084000 host is not running: state=Stopped
	I0807 10:52:49.284740    8208 out.go:177]   To start a cluster, run: "minikube start -p ha-084000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-084000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr: exit status 7 (29.092875ms)

                                                
                                                
-- stdout --
	ha-084000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 10:52:49.316174    8210 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:52:49.316322    8210 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:52:49.316325    8210 out.go:304] Setting ErrFile to fd 2...
	I0807 10:52:49.316327    8210 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:52:49.316459    8210 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:52:49.316575    8210 out.go:298] Setting JSON to false
	I0807 10:52:49.316585    8210 mustload.go:65] Loading cluster: ha-084000
	I0807 10:52:49.316652    8210 notify.go:220] Checking for updates...
	I0807 10:52:49.316779    8210 config.go:182] Loaded profile config "ha-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:52:49.316785    8210 status.go:255] checking status of ha-084000 ...
	I0807 10:52:49.317002    8210 status.go:330] ha-084000 host status = "Stopped" (err=<nil>)
	I0807 10:52:49.317006    8210 status.go:343] host is not running, skipping remaining checks
	I0807 10:52:49.317008    8210 status.go:257] ha-084000 status: &{Name:ha-084000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000: exit status 7 (29.654708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-084000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-084000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-084000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-084000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000: exit status 7 (29.672ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.07s)
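The assertion above parses the escaped JSON blob and compares the profile's "Status" field; with the host down it reads "Stopped" where the test wants "Degraded". To inspect the same field without wading through the escaped output, a one-liner sketch (assumes jq is installed; it is not part of the test harness):

    out/minikube-darwin-arm64 profile list --output json \
      | jq -r '.valid[] | select(.Name == "ha-084000") | .Status'
    # prints: Stopped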

                                                
                                    
TestMultiControlPlane/serial/StopCluster (3.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-084000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-084000 stop -v=7 --alsologtostderr: (3.852333959s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr: exit status 7 (64.067792ms)

                                                
                                                
-- stdout --
	ha-084000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 10:52:53.336864    8241 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:52:53.337078    8241 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:52:53.337082    8241 out.go:304] Setting ErrFile to fd 2...
	I0807 10:52:53.337085    8241 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:52:53.337250    8241 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:52:53.337404    8241 out.go:298] Setting JSON to false
	I0807 10:52:53.337417    8241 mustload.go:65] Loading cluster: ha-084000
	I0807 10:52:53.337453    8241 notify.go:220] Checking for updates...
	I0807 10:52:53.337696    8241 config.go:182] Loaded profile config "ha-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:52:53.337707    8241 status.go:255] checking status of ha-084000 ...
	I0807 10:52:53.338015    8241 status.go:330] ha-084000 host status = "Stopped" (err=<nil>)
	I0807 10:52:53.338021    8241 status.go:343] host is not running, skipping remaining checks
	I0807 10:52:53.338023    8241 status.go:257] ha-084000 status: &{Name:ha-084000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr": ha-084000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr": ha-084000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr": ha-084000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000: exit status 7 (32.734542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.95s)
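The stop itself succeeded (3.85s, exit 0); the failures are the follow-up assertions at ha_test.go:543/549/552, which expect two control-plane nodes, three stopped kubelets, and two stopped apiservers, but see only a single stopped node because the earlier StartCluster never brought up the extra nodes. A rough way to count nodes in the status output, as a sketch:

    # each node contributes one "type:" line; the test expects more than one
    out/minikube-darwin-arm64 -p ha-084000 status -v=7 --alsologtostderr \
      | grep -c 'type: '
    # prints: 1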

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-084000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-084000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.183238s)

                                                
                                                
-- stdout --
	* [ha-084000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-084000" primary control-plane node in "ha-084000" cluster
	* Restarting existing qemu2 VM for "ha-084000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-084000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 10:52:53.399480    8245 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:52:53.399602    8245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:52:53.399610    8245 out.go:304] Setting ErrFile to fd 2...
	I0807 10:52:53.399612    8245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:52:53.399745    8245 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:52:53.400719    8245 out.go:298] Setting JSON to false
	I0807 10:52:53.416791    8245 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4942,"bootTime":1723048231,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 10:52:53.416861    8245 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 10:52:53.422015    8245 out.go:177] * [ha-084000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 10:52:53.428975    8245 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 10:52:53.429027    8245 notify.go:220] Checking for updates...
	I0807 10:52:53.435891    8245 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 10:52:53.439027    8245 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 10:52:53.442016    8245 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 10:52:53.444962    8245 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 10:52:53.447946    8245 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 10:52:53.451184    8245 config.go:182] Loaded profile config "ha-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:52:53.451496    8245 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 10:52:53.455907    8245 out.go:177] * Using the qemu2 driver based on existing profile
	I0807 10:52:53.462924    8245 start.go:297] selected driver: qemu2
	I0807 10:52:53.462930    8245 start.go:901] validating driver "qemu2" against &{Name:ha-084000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-084000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:52:53.462980    8245 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 10:52:53.465183    8245 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 10:52:53.465207    8245 cni.go:84] Creating CNI manager for ""
	I0807 10:52:53.465213    8245 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0807 10:52:53.465257    8245 start.go:340] cluster config:
	{Name:ha-084000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-084000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:52:53.468855    8245 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:52:53.475925    8245 out.go:177] * Starting "ha-084000" primary control-plane node in "ha-084000" cluster
	I0807 10:52:53.479969    8245 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 10:52:53.479985    8245 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 10:52:53.480000    8245 cache.go:56] Caching tarball of preloaded images
	I0807 10:52:53.480065    8245 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 10:52:53.480071    8245 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 10:52:53.480132    8245 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/ha-084000/config.json ...
	I0807 10:52:53.480571    8245 start.go:360] acquireMachinesLock for ha-084000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:52:53.480599    8245 start.go:364] duration metric: took 21.708µs to acquireMachinesLock for "ha-084000"
	I0807 10:52:53.480607    8245 start.go:96] Skipping create...Using existing machine configuration
	I0807 10:52:53.480614    8245 fix.go:54] fixHost starting: 
	I0807 10:52:53.480724    8245 fix.go:112] recreateIfNeeded on ha-084000: state=Stopped err=<nil>
	W0807 10:52:53.480732    8245 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 10:52:53.488986    8245 out.go:177] * Restarting existing qemu2 VM for "ha-084000" ...
	I0807 10:52:53.492991    8245 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:52:53.493038    8245 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:5e:ac:15:44:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/disk.qcow2
	I0807 10:52:53.495007    8245 main.go:141] libmachine: STDOUT: 
	I0807 10:52:53.495024    8245 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:52:53.495053    8245 fix.go:56] duration metric: took 14.43875ms for fixHost
	I0807 10:52:53.495058    8245 start.go:83] releasing machines lock for "ha-084000", held for 14.454833ms
	W0807 10:52:53.495063    8245 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 10:52:53.495097    8245 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:52:53.495102    8245 start.go:729] Will try again in 5 seconds ...
	I0807 10:52:58.497264    8245 start.go:360] acquireMachinesLock for ha-084000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:52:58.497899    8245 start.go:364] duration metric: took 503.916µs to acquireMachinesLock for "ha-084000"
	I0807 10:52:58.498042    8245 start.go:96] Skipping create...Using existing machine configuration
	I0807 10:52:58.498067    8245 fix.go:54] fixHost starting: 
	I0807 10:52:58.498759    8245 fix.go:112] recreateIfNeeded on ha-084000: state=Stopped err=<nil>
	W0807 10:52:58.498786    8245 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 10:52:58.507345    8245 out.go:177] * Restarting existing qemu2 VM for "ha-084000" ...
	I0807 10:52:58.510285    8245 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:52:58.510517    8245 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:5e:ac:15:44:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/ha-084000/disk.qcow2
	I0807 10:52:58.519666    8245 main.go:141] libmachine: STDOUT: 
	I0807 10:52:58.519723    8245 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:52:58.519806    8245 fix.go:56] duration metric: took 21.746541ms for fixHost
	I0807 10:52:58.519825    8245 start.go:83] releasing machines lock for "ha-084000", held for 21.889416ms
	W0807 10:52:58.519970    8245 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-084000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-084000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:52:58.526232    8245 out.go:177] 
	W0807 10:52:58.530339    8245 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 10:52:58.530366    8245 out.go:239] * 
	* 
	W0807 10:52:58.532965    8245 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 10:52:58.542402    8245 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-084000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000: exit status 7 (68.070334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
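The recovery path is the one the log itself prints: delete the wedged profile and start fresh. Note that with socket_vmnet still refusing connections, the re-create fails the same way (see TestImageBuild/serial/Setup below, which dies during initial VM creation). A sketch, using the profile and flags from the test:

    out/minikube-darwin-arm64 delete -p ha-084000
    out/minikube-darwin-arm64 start -p ha-084000 --wait=true --driver=qemu2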

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-084000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-084000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-084000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-084000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000: exit status 7 (29.488ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-084000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-084000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.134833ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-084000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-084000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 10:52:58.732096    8264 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:52:58.732250    8264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:52:58.732254    8264 out.go:304] Setting ErrFile to fd 2...
	I0807 10:52:58.732256    8264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:52:58.732396    8264 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:52:58.732622    8264 mustload.go:65] Loading cluster: ha-084000
	I0807 10:52:58.732794    8264 config.go:182] Loaded profile config "ha-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:52:58.736843    8264 out.go:177] * The control-plane node ha-084000 host is not running: state=Stopped
	I0807 10:52:58.740830    8264 out.go:177]   To start a cluster, run: "minikube start -p ha-084000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-084000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000: exit status 7 (29.581792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-084000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-084000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-084000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-084000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-084000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-084000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-084000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-084000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-084000 -n ha-084000: exit status 7 (29.330958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                    
TestImageBuild/serial/Setup (9.91s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-524000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-524000 --driver=qemu2 : exit status 80 (9.843181042s)

                                                
                                                
-- stdout --
	* [image-524000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-524000" primary control-plane node in "image-524000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-524000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-524000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-524000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-524000 -n image-524000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-524000 -n image-524000: exit status 7 (68.239583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-524000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.91s)
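Unlike the ha-084000 failures, this one dies during initial VM creation ("creating host: create: creating: ..."), which points at the environment rather than stale profile state. One way to confirm nothing is listening on the socket, as a sketch (lsof ships with macOS; -U restricts the listing to Unix-domain sockets):

    # an empty result means no process holds /var/run/socket_vmnet,
    # matching the "Connection refused" seen in every failure above
    sudo lsof -U | grep socket_vmnet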

                                                
                                    
TestJSONOutput/start/Command (9.95s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-845000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-845000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.943739834s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"421acd15-a7c7-4658-8a9d-139c61b908e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-845000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8427de48-93e5-4cca-9634-d110d674f1c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19389"}}
	{"specversion":"1.0","id":"0517613c-cb19-467c-98c9-cf8e0655b318","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig"}}
	{"specversion":"1.0","id":"ea481174-6292-4ee8-973e-694d52e5a357","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"a463505f-4b9b-4c4f-964b-ad84cc7acfa3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"491a39cc-9175-4c20-b606-4056879425c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube"}}
	{"specversion":"1.0","id":"3b9fe5b9-0406-4971-8ff6-c6ea8c967f40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"10e2ae38-25a6-45d8-9898-ebcf9a4669ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a024fac9-0a6b-48a8-8f34-74e1b7147903","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"bb1ac19c-fff6-4065-b110-e2dc9b8d5629","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-845000\" primary control-plane node in \"json-output-845000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f2296635-1e1c-411e-8339-9d37c2589366","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"8de16151-d2fd-4406-8fc9-d16a26015395","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-845000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"1bf81baa-0239-45f9-a4ad-2fa033a2fc22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"364395b5-2b56-4917-a44d-743c802d6a44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"c5034754-20ab-4fce-b6f8-1727891875db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-845000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"556a69a6-55dd-4d0f-90ae-baa5425ae06c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"e155217d-0cd7-4903-8f9e-76eca73e3e68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-845000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.95s)
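
The parse error above ("invalid character 'O' looking for beginning of value"), and the matching '*' variant in TestJSONOutput/unpause below, follow directly from non-JSON lines landing in a stream that is decoded line by line as CloudEvents. A stdlib-only sketch of that decode step (the struct and sample stream here are illustrative, not copied from json_output_test.go) reproduces both errors:

	// decode_events.go - illustrative sketch of per-line CloudEvents decoding.
	// The bare "OUTPUT:" line from socket_vmnet_client and the "*"-prefixed
	// human-readable line both fail json.Unmarshal with exactly the
	// "invalid character ... looking for beginning of value" errors reported.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"strings"
	)

	type cloudEvent struct {
		SpecVersion string          `json:"specversion"`
		Type        string          `json:"type"`
		Data        json.RawMessage `json:"data"`
	}

	func main() {
		stream := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{}}
	OUTPUT: 
	* The control-plane node host is not running: state=Stopped`

		sc := bufio.NewScanner(strings.NewReader(stream))
		for sc.Scan() {
			var ev cloudEvent
			if err := json.Unmarshal([]byte(strings.TrimSpace(sc.Text())), &ev); err != nil {
				fmt.Println("not a CloudEvent:", err)
				continue
			}
			fmt.Println("event:", ev.Type)
		}
	}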

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-845000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-845000 --output=json --user=testUser: exit status 83 (82.014291ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8107f58d-b9ef-4d1b-b05f-e237bac8ae1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-845000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"b61a6f67-a662-4208-9061-29409747ba66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-845000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-845000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-845000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-845000 --output=json --user=testUser: exit status 83 (47.148958ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-845000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-845000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-845000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-845000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.12s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-742000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-742000 --driver=qemu2 : exit status 80 (9.819123959s)

                                                
                                                
-- stdout --
	* [first-742000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-742000" primary control-plane node in "first-742000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-742000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-742000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-742000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-07 10:53:31.041367 -0700 PDT m=+463.632514501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-744000 -n second-744000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-744000 -n second-744000: exit status 85 (81.491125ms)

                                                
                                                
-- stdout --
	* Profile "second-744000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-744000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-744000" host is not running, skipping log retrieval (state="* Profile \"second-744000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-744000\"")
helpers_test.go:175: Cleaning up "second-744000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-744000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-07 10:53:31.232798 -0700 PDT m=+463.823946209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-742000 -n first-742000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-742000 -n first-742000: exit status 7 (29.417541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-742000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-742000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-742000
--- FAIL: TestMinikubeProfile (10.12s)
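
Four exit codes recur across this run. The mapping sketched below is inferred only from the messages paired with each code in this report; it is not minikube's authoritative reason table:

	// exit_codes.go - observed exit codes and the log messages that
	// accompany them in this report (an inference from this run, not a spec).
	package main

	import "fmt"

	func main() {
		observed := map[int]string{
			80: "GUEST_PROVISION: error provisioning guest (VM create failed)",
			83: "control-plane host is not running: state=Stopped",
			85: "profile not found",
			7:  "status query against a stopped host (may be ok)",
		}
		for code, msg := range observed {
			fmt.Printf("exit status %d -> %s\n", code, msg)
		}
	}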

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.26s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-811000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-811000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.187702458s)

                                                
                                                
-- stdout --
	* [mount-start-1-811000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-811000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-811000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-811000 -n mount-start-1-811000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-811000 -n mount-start-1-811000: exit status 7 (68.163791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-811000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-190000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-190000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.758704041s)

                                                
                                                
-- stdout --
	* [multinode-190000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-190000" primary control-plane node in "multinode-190000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-190000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 10:53:41.810406    8417 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:53:41.810551    8417 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:53:41.810555    8417 out.go:304] Setting ErrFile to fd 2...
	I0807 10:53:41.810557    8417 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:53:41.810686    8417 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:53:41.811812    8417 out.go:298] Setting JSON to false
	I0807 10:53:41.827934    8417 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4990,"bootTime":1723048231,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 10:53:41.827996    8417 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 10:53:41.834401    8417 out.go:177] * [multinode-190000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 10:53:41.843307    8417 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 10:53:41.843372    8417 notify.go:220] Checking for updates...
	I0807 10:53:41.851172    8417 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 10:53:41.854260    8417 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 10:53:41.857277    8417 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 10:53:41.858718    8417 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 10:53:41.862273    8417 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 10:53:41.865422    8417 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 10:53:41.869114    8417 out.go:177] * Using the qemu2 driver based on user configuration
	I0807 10:53:41.876274    8417 start.go:297] selected driver: qemu2
	I0807 10:53:41.876281    8417 start.go:901] validating driver "qemu2" against <nil>
	I0807 10:53:41.876289    8417 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 10:53:41.878657    8417 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 10:53:41.882294    8417 out.go:177] * Automatically selected the socket_vmnet network
	I0807 10:53:41.885310    8417 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 10:53:41.885359    8417 cni.go:84] Creating CNI manager for ""
	I0807 10:53:41.885365    8417 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0807 10:53:41.885370    8417 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0807 10:53:41.885411    8417 start.go:340] cluster config:
	{Name:multinode-190000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-190000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:53:41.889206    8417 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:53:41.895196    8417 out.go:177] * Starting "multinode-190000" primary control-plane node in "multinode-190000" cluster
	I0807 10:53:41.899270    8417 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 10:53:41.899290    8417 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 10:53:41.899302    8417 cache.go:56] Caching tarball of preloaded images
	I0807 10:53:41.899364    8417 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 10:53:41.899369    8417 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 10:53:41.899567    8417 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/multinode-190000/config.json ...
	I0807 10:53:41.899578    8417 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/multinode-190000/config.json: {Name:mke4bf4c2444c87968a07077d4c90160fc396d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 10:53:41.899951    8417 start.go:360] acquireMachinesLock for multinode-190000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:53:41.899984    8417 start.go:364] duration metric: took 27.416µs to acquireMachinesLock for "multinode-190000"
	I0807 10:53:41.899994    8417 start.go:93] Provisioning new machine with config: &{Name:multinode-190000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-190000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 10:53:41.900036    8417 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 10:53:41.907291    8417 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 10:53:41.924640    8417 start.go:159] libmachine.API.Create for "multinode-190000" (driver="qemu2")
	I0807 10:53:41.924670    8417 client.go:168] LocalClient.Create starting
	I0807 10:53:41.924731    8417 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 10:53:41.924759    8417 main.go:141] libmachine: Decoding PEM data...
	I0807 10:53:41.924767    8417 main.go:141] libmachine: Parsing certificate...
	I0807 10:53:41.924803    8417 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 10:53:41.924825    8417 main.go:141] libmachine: Decoding PEM data...
	I0807 10:53:41.924834    8417 main.go:141] libmachine: Parsing certificate...
	I0807 10:53:41.925326    8417 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 10:53:42.081542    8417 main.go:141] libmachine: Creating SSH key...
	I0807 10:53:42.129574    8417 main.go:141] libmachine: Creating Disk image...
	I0807 10:53:42.129581    8417 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 10:53:42.129816    8417 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/disk.qcow2
	I0807 10:53:42.138981    8417 main.go:141] libmachine: STDOUT: 
	I0807 10:53:42.138998    8417 main.go:141] libmachine: STDERR: 
	I0807 10:53:42.139040    8417 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/disk.qcow2 +20000M
	I0807 10:53:42.146834    8417 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 10:53:42.146849    8417 main.go:141] libmachine: STDERR: 
	I0807 10:53:42.146859    8417 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/disk.qcow2
	I0807 10:53:42.146865    8417 main.go:141] libmachine: Starting QEMU VM...
	I0807 10:53:42.146875    8417 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:53:42.146914    8417 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:b2:55:ac:81:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/disk.qcow2
	I0807 10:53:42.148490    8417 main.go:141] libmachine: STDOUT: 
	I0807 10:53:42.148505    8417 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:53:42.148523    8417 client.go:171] duration metric: took 223.849625ms to LocalClient.Create
	I0807 10:53:44.150674    8417 start.go:128] duration metric: took 2.25063725s to createHost
	I0807 10:53:44.150750    8417 start.go:83] releasing machines lock for "multinode-190000", held for 2.250772667s
	W0807 10:53:44.150798    8417 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:53:44.160772    8417 out.go:177] * Deleting "multinode-190000" in qemu2 ...
	W0807 10:53:44.195302    8417 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:53:44.195318    8417 start.go:729] Will try again in 5 seconds ...
	I0807 10:53:49.197516    8417 start.go:360] acquireMachinesLock for multinode-190000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:53:49.197884    8417 start.go:364] duration metric: took 293.167µs to acquireMachinesLock for "multinode-190000"
	I0807 10:53:49.197997    8417 start.go:93] Provisioning new machine with config: &{Name:multinode-190000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-190000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 10:53:49.198263    8417 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 10:53:49.208785    8417 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 10:53:49.255536    8417 start.go:159] libmachine.API.Create for "multinode-190000" (driver="qemu2")
	I0807 10:53:49.255589    8417 client.go:168] LocalClient.Create starting
	I0807 10:53:49.255708    8417 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 10:53:49.255786    8417 main.go:141] libmachine: Decoding PEM data...
	I0807 10:53:49.255807    8417 main.go:141] libmachine: Parsing certificate...
	I0807 10:53:49.255879    8417 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 10:53:49.255931    8417 main.go:141] libmachine: Decoding PEM data...
	I0807 10:53:49.255945    8417 main.go:141] libmachine: Parsing certificate...
	I0807 10:53:49.256476    8417 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 10:53:49.423795    8417 main.go:141] libmachine: Creating SSH key...
	I0807 10:53:49.471202    8417 main.go:141] libmachine: Creating Disk image...
	I0807 10:53:49.471207    8417 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 10:53:49.471426    8417 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/disk.qcow2
	I0807 10:53:49.480516    8417 main.go:141] libmachine: STDOUT: 
	I0807 10:53:49.480534    8417 main.go:141] libmachine: STDERR: 
	I0807 10:53:49.480590    8417 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/disk.qcow2 +20000M
	I0807 10:53:49.488289    8417 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 10:53:49.488308    8417 main.go:141] libmachine: STDERR: 
	I0807 10:53:49.488320    8417 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/disk.qcow2
	I0807 10:53:49.488324    8417 main.go:141] libmachine: Starting QEMU VM...
	I0807 10:53:49.488335    8417 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:53:49.488364    8417 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:0e:85:8f:3d:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/disk.qcow2
	I0807 10:53:49.489976    8417 main.go:141] libmachine: STDOUT: 
	I0807 10:53:49.489993    8417 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:53:49.490006    8417 client.go:171] duration metric: took 234.41225ms to LocalClient.Create
	I0807 10:53:51.492170    8417 start.go:128] duration metric: took 2.293884125s to createHost
	I0807 10:53:51.492244    8417 start.go:83] releasing machines lock for "multinode-190000", held for 2.294353792s
	W0807 10:53:51.492584    8417 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-190000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-190000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:53:51.507428    8417 out.go:177] 
	W0807 10:53:51.511395    8417 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 10:53:51.511421    8417 out.go:239] * 
	* 
	W0807 10:53:51.514134    8417 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 10:53:51.527305    8417 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-190000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000: exit status 7 (66.479083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-190000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.83s)
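
The stderr trace above shows the qemu2 driver's provisioning steps in full: both qemu-img invocations succeed, and only the subsequent socket_vmnet_client launch fails. A condensed sketch of the two disk-image steps (paths shortened to stand-ins, error handling reduced to logging, qemu-img assumed on PATH):

	// disk_image.go - the two qemu-img invocations visible in the libmachine
	// log above, condensed. Both return cleanly in this run; the failure
	// occurs only afterwards, when the VM is launched via socket_vmnet_client.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\nerr=%v\n%s\n", name, args, err, out)
	}

	func main() {
		disk := "disk.qcow2" // stand-in for the per-profile path in the log
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", disk+".raw", disk)
		run("qemu-img", "resize", disk, "+20000M")
	}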

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (113.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-190000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-190000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (59.129625ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-190000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-190000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-190000 -- rollout status deployment/busybox: exit status 1 (55.697292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-190000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.477ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-190000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.168125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-190000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.967792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-190000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.545333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-190000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.611417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-190000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.26675ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-190000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.966166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-190000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.90425ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-190000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.712042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-190000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.909458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-190000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.505166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-190000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.655709ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-190000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-190000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-190000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.215ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-190000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-190000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-190000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.897792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-190000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-190000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-190000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.14475ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-190000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000: exit status 7 (29.318583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-190000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (113.50s)
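Note: every kubectl call in this test fails with the same stderr line, error: no server found for cluster "multinode-190000", because the qemu2 VM never started and the profile's kubeconfig therefore has no reachable API server. A minimal sketch of the guard implied by the post-mortem above -- probe the host state before issuing kubectl queries -- assuming the same binary path and profile name as in this log (this code is not part of the suite):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // hostRunning mirrors the post-mortem check: `minikube status
    // --format={{.Host}}` exits non-zero (7 in this log) when the host
    // is stopped.
    func hostRunning(profile string) bool {
    	err := exec.Command("out/minikube-darwin-arm64", "status",
    		"--format={{.Host}}", "-p", profile).Run()
    	return err == nil
    }

    func main() {
    	if !hostRunning("multinode-190000") {
    		fmt.Println("host stopped; any kubectl call will report: no server found for cluster")
    	}
    }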

TestMultiNode/serial/PingHostFrom2Pods (0.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-190000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.2335ms)

** stderr ** 
	error: no server found for cluster "multinode-190000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000: exit status 7 (29.387208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-190000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-190000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-190000 -v 3 --alsologtostderr: exit status 83 (43.443875ms)

-- stdout --
	* The control-plane node multinode-190000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-190000"

-- /stdout --
** stderr ** 
	I0807 10:55:45.223871    8548 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:55:45.224051    8548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:55:45.224054    8548 out.go:304] Setting ErrFile to fd 2...
	I0807 10:55:45.224056    8548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:55:45.224191    8548 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:55:45.224423    8548 mustload.go:65] Loading cluster: multinode-190000
	I0807 10:55:45.224612    8548 config.go:182] Loaded profile config "multinode-190000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:55:45.228927    8548 out.go:177] * The control-plane node multinode-190000 host is not running: state=Stopped
	I0807 10:55:45.232870    8548 out.go:177]   To start a cluster, run: "minikube start -p multinode-190000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-190000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000: exit status 7 (29.310667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-190000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-190000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-190000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.90675ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-190000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-190000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-190000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000: exit status 7 (30.369917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-190000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
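Note: the secondary failure here, "unexpected end of JSON input", is a knock-on effect rather than a separate bug: kubectl exited 1 without writing anything to stdout, and the test then tries to decode an empty buffer. A minimal reproduction of that decode step (standalone Go, independent of the suite):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// kubectl printed nothing before exiting 1, so the test effectively
    	// unmarshals an empty byte slice:
    	var labels []map[string]string
    	err := json.Unmarshal([]byte{}, &labels)
    	fmt.Println(err) // unexpected end of JSON input
    }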

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-190000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-190000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-190000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-190000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000: exit status 7 (29.412ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-190000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
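Note: this check decodes the output of `profile list --output json` and counts the entries under Config.Nodes; since the worker nodes were never provisioned, only the primary control-plane node is present. A pared-down sketch of that decode, modeling only the fields the assertion touches (the struct names are illustrative, not minikube's own):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type profileList struct {
    	Valid []struct {
    		Name   string
    		Config struct {
    			Nodes []struct {
    				ControlPlane bool
    				Worker       bool
    			}
    		}
    	} `json:"valid"`
    }

    func main() {
    	// Trimmed-down version of the JSON quoted in the failure above.
    	raw := []byte(`{"valid":[{"Name":"multinode-190000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
    	var pl profileList
    	if err := json.Unmarshal(raw, &pl); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d node(s)\n", len(pl.Valid[0].Config.Nodes)) // prints 1; the test expects 3
    }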

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-190000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-190000 status --output json --alsologtostderr: exit status 7 (30.19225ms)

-- stdout --
	{"Name":"multinode-190000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0807 10:55:45.432085    8560 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:55:45.432228    8560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:55:45.432231    8560 out.go:304] Setting ErrFile to fd 2...
	I0807 10:55:45.432233    8560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:55:45.432367    8560 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:55:45.432486    8560 out.go:298] Setting JSON to true
	I0807 10:55:45.432498    8560 mustload.go:65] Loading cluster: multinode-190000
	I0807 10:55:45.432545    8560 notify.go:220] Checking for updates...
	I0807 10:55:45.432686    8560 config.go:182] Loaded profile config "multinode-190000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:55:45.432693    8560 status.go:255] checking status of multinode-190000 ...
	I0807 10:55:45.432901    8560 status.go:330] multinode-190000 host status = "Stopped" (err=<nil>)
	I0807 10:55:45.432905    8560 status.go:343] host is not running, skipping remaining checks
	I0807 10:55:45.432907    8560 status.go:257] multinode-190000 status: &{Name:multinode-190000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-190000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000: exit status 7 (29.227625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-190000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
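Note: the decode error here is a shape mismatch, not corrupt output: with a single node, `minikube status --output json` prints one JSON object, while the test unmarshals into a slice ([]cmd.Status). A minimal reproduction, with Status standing in for the cmd.Status type named in the error:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Status is a stand-in for cmd.Status; two fields suffice to show
    // the mismatch.
    type Status struct {
    	Name string
    	Host string
    }

    func main() {
    	out := []byte(`{"Name":"multinode-190000","Host":"Stopped"}`) // object, not array
    	var st []Status
    	err := json.Unmarshal(out, &st)
    	fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
    }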

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-190000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-190000 node stop m03: exit status 85 (48.858625ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-190000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-190000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-190000 status: exit status 7 (30.369292ms)

-- stdout --
	multinode-190000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-190000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-190000 status --alsologtostderr: exit status 7 (29.695833ms)

-- stdout --
	multinode-190000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0807 10:55:45.571026    8568 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:55:45.571170    8568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:55:45.571173    8568 out.go:304] Setting ErrFile to fd 2...
	I0807 10:55:45.571175    8568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:55:45.571325    8568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:55:45.571449    8568 out.go:298] Setting JSON to false
	I0807 10:55:45.571459    8568 mustload.go:65] Loading cluster: multinode-190000
	I0807 10:55:45.571519    8568 notify.go:220] Checking for updates...
	I0807 10:55:45.571654    8568 config.go:182] Loaded profile config "multinode-190000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:55:45.571660    8568 status.go:255] checking status of multinode-190000 ...
	I0807 10:55:45.571881    8568 status.go:330] multinode-190000 host status = "Stopped" (err=<nil>)
	I0807 10:55:45.571885    8568 status.go:343] host is not running, skipping remaining checks
	I0807 10:55:45.571887    8568 status.go:257] multinode-190000 status: &{Name:multinode-190000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-190000 status --alsologtostderr": multinode-190000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000: exit status 7 (30.048458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-190000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
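Note: `node stop m03` exits 85 (GUEST_NODE_RETRIEVE) because the cluster never grew past its primary node; the workers that the failed AddNode step would have created do not exist. A small sketch of the node-naming convention the command relies on (assumption: minikube numbers secondary nodes m02, m03, ... under the profile name):

    package main

    import "fmt"

    // nodeNames lists the node names an n-node cluster would have under
    // this convention; the primary node carries the bare profile name.
    func nodeNames(profile string, n int) []string {
    	names := []string{profile}
    	for i := 2; i <= n; i++ {
    		names = append(names, fmt.Sprintf("%s-m%02d", profile, i))
    	}
    	return names
    }

    func main() {
    	fmt.Println(nodeNames("multinode-190000", 3))
    	// [multinode-190000 multinode-190000-m02 multinode-190000-m03]
    }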

TestMultiNode/serial/StartAfterStop (51.19s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-190000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-190000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.321583ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0807 10:55:45.631123    8572 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:55:45.631594    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:55:45.631598    8572 out.go:304] Setting ErrFile to fd 2...
	I0807 10:55:45.631601    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:55:45.631749    8572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:55:45.631968    8572 mustload.go:65] Loading cluster: multinode-190000
	I0807 10:55:45.632180    8572 config.go:182] Loaded profile config "multinode-190000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:55:45.636720    8572 out.go:177] 
	W0807 10:55:45.639710    8572 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0807 10:55:45.639715    8572 out.go:239] * 
	* 
	W0807 10:55:45.641722    8572 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 10:55:45.645725    8572 out.go:177] 

** /stderr **
multinode_test.go:284: I0807 10:55:45.631123    8572 out.go:291] Setting OutFile to fd 1 ...
I0807 10:55:45.631594    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 10:55:45.631598    8572 out.go:304] Setting ErrFile to fd 2...
I0807 10:55:45.631601    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 10:55:45.631749    8572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
I0807 10:55:45.631968    8572 mustload.go:65] Loading cluster: multinode-190000
I0807 10:55:45.632180    8572 config.go:182] Loaded profile config "multinode-190000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0807 10:55:45.636720    8572 out.go:177] 
W0807 10:55:45.639710    8572 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0807 10:55:45.639715    8572 out.go:239] * 
* 
W0807 10:55:45.641722    8572 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0807 10:55:45.645725    8572 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-190000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-190000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-190000 status -v=7 --alsologtostderr: exit status 7 (29.644125ms)

-- stdout --
	multinode-190000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0807 10:55:45.678548    8574 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:55:45.678709    8574 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:55:45.678715    8574 out.go:304] Setting ErrFile to fd 2...
	I0807 10:55:45.678718    8574 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:55:45.678856    8574 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:55:45.678971    8574 out.go:298] Setting JSON to false
	I0807 10:55:45.678980    8574 mustload.go:65] Loading cluster: multinode-190000
	I0807 10:55:45.679036    8574 notify.go:220] Checking for updates...
	I0807 10:55:45.679190    8574 config.go:182] Loaded profile config "multinode-190000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:55:45.679196    8574 status.go:255] checking status of multinode-190000 ...
	I0807 10:55:45.679402    8574 status.go:330] multinode-190000 host status = "Stopped" (err=<nil>)
	I0807 10:55:45.679406    8574 status.go:343] host is not running, skipping remaining checks
	I0807 10:55:45.679409    8574 status.go:257] multinode-190000 status: &{Name:multinode-190000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-190000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-190000 status -v=7 --alsologtostderr: exit status 7 (70.926458ms)

-- stdout --
	multinode-190000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0807 10:55:46.536130    8576 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:55:46.536341    8576 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:55:46.536352    8576 out.go:304] Setting ErrFile to fd 2...
	I0807 10:55:46.536356    8576 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:55:46.536565    8576 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:55:46.536747    8576 out.go:298] Setting JSON to false
	I0807 10:55:46.536762    8576 mustload.go:65] Loading cluster: multinode-190000
	I0807 10:55:46.536801    8576 notify.go:220] Checking for updates...
	I0807 10:55:46.537045    8576 config.go:182] Loaded profile config "multinode-190000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:55:46.537056    8576 status.go:255] checking status of multinode-190000 ...
	I0807 10:55:46.537367    8576 status.go:330] multinode-190000 host status = "Stopped" (err=<nil>)
	I0807 10:55:46.537372    8576 status.go:343] host is not running, skipping remaining checks
	I0807 10:55:46.537375    8576 status.go:257] multinode-190000 status: &{Name:multinode-190000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-190000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-190000 status -v=7 --alsologtostderr: exit status 7 (71.582459ms)

-- stdout --
	multinode-190000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0807 10:55:47.818037    8580 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:55:47.818224    8580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:55:47.818228    8580 out.go:304] Setting ErrFile to fd 2...
	I0807 10:55:47.818230    8580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:55:47.818392    8580 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:55:47.818567    8580 out.go:298] Setting JSON to false
	I0807 10:55:47.818580    8580 mustload.go:65] Loading cluster: multinode-190000
	I0807 10:55:47.818616    8580 notify.go:220] Checking for updates...
	I0807 10:55:47.818838    8580 config.go:182] Loaded profile config "multinode-190000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:55:47.818846    8580 status.go:255] checking status of multinode-190000 ...
	I0807 10:55:47.819140    8580 status.go:330] multinode-190000 host status = "Stopped" (err=<nil>)
	I0807 10:55:47.819145    8580 status.go:343] host is not running, skipping remaining checks
	I0807 10:55:47.819148    8580 status.go:257] multinode-190000 status: &{Name:multinode-190000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-190000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-190000 status -v=7 --alsologtostderr: exit status 7 (73.4745ms)

-- stdout --
	multinode-190000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0807 10:55:50.753824    8586 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:55:50.754042    8586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:55:50.754047    8586 out.go:304] Setting ErrFile to fd 2...
	I0807 10:55:50.754050    8586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:55:50.754229    8586 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:55:50.754376    8586 out.go:298] Setting JSON to false
	I0807 10:55:50.754389    8586 mustload.go:65] Loading cluster: multinode-190000
	I0807 10:55:50.754429    8586 notify.go:220] Checking for updates...
	I0807 10:55:50.754673    8586 config.go:182] Loaded profile config "multinode-190000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:55:50.754681    8586 status.go:255] checking status of multinode-190000 ...
	I0807 10:55:50.754954    8586 status.go:330] multinode-190000 host status = "Stopped" (err=<nil>)
	I0807 10:55:50.754959    8586 status.go:343] host is not running, skipping remaining checks
	I0807 10:55:50.754962    8586 status.go:257] multinode-190000 status: &{Name:multinode-190000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-190000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-190000 status -v=7 --alsologtostderr: exit status 7 (73.189375ms)

-- stdout --
	multinode-190000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0807 10:55:55.286353    8590 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:55:55.286539    8590 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:55:55.286548    8590 out.go:304] Setting ErrFile to fd 2...
	I0807 10:55:55.286551    8590 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:55:55.286728    8590 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:55:55.286884    8590 out.go:298] Setting JSON to false
	I0807 10:55:55.286897    8590 mustload.go:65] Loading cluster: multinode-190000
	I0807 10:55:55.286937    8590 notify.go:220] Checking for updates...
	I0807 10:55:55.287155    8590 config.go:182] Loaded profile config "multinode-190000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:55:55.287163    8590 status.go:255] checking status of multinode-190000 ...
	I0807 10:55:55.287455    8590 status.go:330] multinode-190000 host status = "Stopped" (err=<nil>)
	I0807 10:55:55.287459    8590 status.go:343] host is not running, skipping remaining checks
	I0807 10:55:55.287462    8590 status.go:257] multinode-190000 status: &{Name:multinode-190000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-190000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-190000 status -v=7 --alsologtostderr: exit status 7 (72.112792ms)

-- stdout --
	multinode-190000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0807 10:55:59.442111    8592 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:55:59.442327    8592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:55:59.442331    8592 out.go:304] Setting ErrFile to fd 2...
	I0807 10:55:59.442334    8592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:55:59.442523    8592 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:55:59.442669    8592 out.go:298] Setting JSON to false
	I0807 10:55:59.442682    8592 mustload.go:65] Loading cluster: multinode-190000
	I0807 10:55:59.442725    8592 notify.go:220] Checking for updates...
	I0807 10:55:59.442922    8592 config.go:182] Loaded profile config "multinode-190000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:55:59.442929    8592 status.go:255] checking status of multinode-190000 ...
	I0807 10:55:59.443208    8592 status.go:330] multinode-190000 host status = "Stopped" (err=<nil>)
	I0807 10:55:59.443213    8592 status.go:343] host is not running, skipping remaining checks
	I0807 10:55:59.443216    8592 status.go:257] multinode-190000 status: &{Name:multinode-190000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-190000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-190000 status -v=7 --alsologtostderr: exit status 7 (74.452792ms)

-- stdout --
	multinode-190000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0807 10:56:05.401502    8596 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:56:05.401692    8596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:56:05.401697    8596 out.go:304] Setting ErrFile to fd 2...
	I0807 10:56:05.401700    8596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:56:05.401924    8596 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:56:05.402106    8596 out.go:298] Setting JSON to false
	I0807 10:56:05.402124    8596 mustload.go:65] Loading cluster: multinode-190000
	I0807 10:56:05.402164    8596 notify.go:220] Checking for updates...
	I0807 10:56:05.402377    8596 config.go:182] Loaded profile config "multinode-190000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:56:05.402384    8596 status.go:255] checking status of multinode-190000 ...
	I0807 10:56:05.402683    8596 status.go:330] multinode-190000 host status = "Stopped" (err=<nil>)
	I0807 10:56:05.402689    8596 status.go:343] host is not running, skipping remaining checks
	I0807 10:56:05.402692    8596 status.go:257] multinode-190000 status: &{Name:multinode-190000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-190000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-190000 status -v=7 --alsologtostderr: exit status 7 (72.976417ms)

-- stdout --
	multinode-190000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0807 10:56:19.752463    8608 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:56:19.752705    8608 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:56:19.752710    8608 out.go:304] Setting ErrFile to fd 2...
	I0807 10:56:19.752714    8608 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:56:19.752901    8608 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:56:19.753096    8608 out.go:298] Setting JSON to false
	I0807 10:56:19.753119    8608 mustload.go:65] Loading cluster: multinode-190000
	I0807 10:56:19.753163    8608 notify.go:220] Checking for updates...
	I0807 10:56:19.753411    8608 config.go:182] Loaded profile config "multinode-190000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:56:19.753418    8608 status.go:255] checking status of multinode-190000 ...
	I0807 10:56:19.753684    8608 status.go:330] multinode-190000 host status = "Stopped" (err=<nil>)
	I0807 10:56:19.753689    8608 status.go:343] host is not running, skipping remaining checks
	I0807 10:56:19.753692    8608 status.go:257] multinode-190000 status: &{Name:multinode-190000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-190000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-190000 status -v=7 --alsologtostderr: exit status 7 (70.874041ms)

-- stdout --
	multinode-190000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0807 10:56:36.759583    8620 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:56:36.759794    8620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:56:36.759799    8620 out.go:304] Setting ErrFile to fd 2...
	I0807 10:56:36.759803    8620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:56:36.759997    8620 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:56:36.760164    8620 out.go:298] Setting JSON to false
	I0807 10:56:36.760177    8620 mustload.go:65] Loading cluster: multinode-190000
	I0807 10:56:36.760219    8620 notify.go:220] Checking for updates...
	I0807 10:56:36.760468    8620 config.go:182] Loaded profile config "multinode-190000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:56:36.760477    8620 status.go:255] checking status of multinode-190000 ...
	I0807 10:56:36.760761    8620 status.go:330] multinode-190000 host status = "Stopped" (err=<nil>)
	I0807 10:56:36.760766    8620 status.go:343] host is not running, skipping remaining checks
	I0807 10:56:36.760769    8620 status.go:257] multinode-190000 status: &{Name:multinode-190000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-190000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000: exit status 7 (33.15675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-190000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (51.19s)
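Note: the 51.19s wall-clock time comes from a poll loop: the same status command is retried at growing intervals (the timestamps above run from 10:55:45 to 10:56:36) and every attempt finds the host Stopped. A rough sketch of such a retry loop, assuming the same binary path and profile as in this log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	delay := time.Second
    	for attempt := 1; attempt <= 9; attempt++ {
    		err := exec.Command("out/minikube-darwin-arm64", "-p",
    			"multinode-190000", "status").Run()
    		if err == nil {
    			fmt.Println("host is running")
    			return
    		}
    		fmt.Printf("attempt %d: still stopped, retrying in %v\n", attempt, delay)
    		time.Sleep(delay)
    		delay *= 2 // back off between attempts
    	}
    	fmt.Println("gave up: host never left the Stopped state")
    }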

TestMultiNode/serial/RestartKeepsNodes (8.73s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-190000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-190000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-190000: (3.3742985s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-190000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-190000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.224653708s)

-- stdout --
	* [multinode-190000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-190000" primary control-plane node in "multinode-190000" cluster
	* Restarting existing qemu2 VM for "multinode-190000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-190000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 10:56:40.264266    8646 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:56:40.264429    8646 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:56:40.264434    8646 out.go:304] Setting ErrFile to fd 2...
	I0807 10:56:40.264437    8646 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:56:40.264608    8646 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:56:40.265871    8646 out.go:298] Setting JSON to false
	I0807 10:56:40.285294    8646 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5169,"bootTime":1723048231,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 10:56:40.285360    8646 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 10:56:40.290833    8646 out.go:177] * [multinode-190000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 10:56:40.297834    8646 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 10:56:40.297883    8646 notify.go:220] Checking for updates...
	I0807 10:56:40.304762    8646 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 10:56:40.307830    8646 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 10:56:40.310809    8646 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 10:56:40.313788    8646 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 10:56:40.316765    8646 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 10:56:40.320075    8646 config.go:182] Loaded profile config "multinode-190000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:56:40.320136    8646 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 10:56:40.324734    8646 out.go:177] * Using the qemu2 driver based on existing profile
	I0807 10:56:40.330751    8646 start.go:297] selected driver: qemu2
	I0807 10:56:40.330759    8646 start.go:901] validating driver "qemu2" against &{Name:multinode-190000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:multinode-190000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:56:40.330840    8646 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 10:56:40.333023    8646 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 10:56:40.333064    8646 cni.go:84] Creating CNI manager for ""
	I0807 10:56:40.333070    8646 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0807 10:56:40.333110    8646 start.go:340] cluster config:
	{Name:multinode-190000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-190000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:56:40.336732    8646 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:56:40.344825    8646 out.go:177] * Starting "multinode-190000" primary control-plane node in "multinode-190000" cluster
	I0807 10:56:40.348784    8646 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 10:56:40.348804    8646 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 10:56:40.348813    8646 cache.go:56] Caching tarball of preloaded images
	I0807 10:56:40.348877    8646 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 10:56:40.348883    8646 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 10:56:40.348950    8646 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/multinode-190000/config.json ...
	I0807 10:56:40.349435    8646 start.go:360] acquireMachinesLock for multinode-190000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:56:40.349473    8646 start.go:364] duration metric: took 31.792µs to acquireMachinesLock for "multinode-190000"
	I0807 10:56:40.349482    8646 start.go:96] Skipping create...Using existing machine configuration
	I0807 10:56:40.349491    8646 fix.go:54] fixHost starting: 
	I0807 10:56:40.349623    8646 fix.go:112] recreateIfNeeded on multinode-190000: state=Stopped err=<nil>
	W0807 10:56:40.349634    8646 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 10:56:40.357729    8646 out.go:177] * Restarting existing qemu2 VM for "multinode-190000" ...
	I0807 10:56:40.361792    8646 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:56:40.361853    8646 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:0e:85:8f:3d:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/disk.qcow2
	I0807 10:56:40.363911    8646 main.go:141] libmachine: STDOUT: 
	I0807 10:56:40.363929    8646 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:56:40.363955    8646 fix.go:56] duration metric: took 14.465209ms for fixHost
	I0807 10:56:40.363961    8646 start.go:83] releasing machines lock for "multinode-190000", held for 14.483209ms
	W0807 10:56:40.363966    8646 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 10:56:40.363994    8646 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:56:40.363999    8646 start.go:729] Will try again in 5 seconds ...
	I0807 10:56:45.366256    8646 start.go:360] acquireMachinesLock for multinode-190000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:56:45.366670    8646 start.go:364] duration metric: took 316.417µs to acquireMachinesLock for "multinode-190000"
	I0807 10:56:45.366802    8646 start.go:96] Skipping create...Using existing machine configuration
	I0807 10:56:45.366820    8646 fix.go:54] fixHost starting: 
	I0807 10:56:45.367558    8646 fix.go:112] recreateIfNeeded on multinode-190000: state=Stopped err=<nil>
	W0807 10:56:45.367588    8646 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 10:56:45.372196    8646 out.go:177] * Restarting existing qemu2 VM for "multinode-190000" ...
	I0807 10:56:45.377088    8646 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:56:45.377304    8646 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:0e:85:8f:3d:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/disk.qcow2
	I0807 10:56:45.386578    8646 main.go:141] libmachine: STDOUT: 
	I0807 10:56:45.386651    8646 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:56:45.386761    8646 fix.go:56] duration metric: took 19.942166ms for fixHost
	I0807 10:56:45.386824    8646 start.go:83] releasing machines lock for "multinode-190000", held for 20.092208ms
	W0807 10:56:45.387034    8646 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-190000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-190000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:56:45.396103    8646 out.go:177] 
	W0807 10:56:45.400193    8646 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 10:56:45.400216    8646 out.go:239] * 
	* 
	W0807 10:56:45.402639    8646 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 10:56:45.410976    8646 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-190000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-190000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000: exit status 7 (32.1715ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-190000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.73s)
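
Every failed start in this test dies at the same point: the qemu2 driver execs /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal Go sketch of that reachability check follows; the socket path is taken from the log above, but the probe itself is illustrative and is not minikube's own health check.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // "connection refused" here means no socket_vmnet daemon is accepting
        // connections on the socket, e.g. its launchd service is not running.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If this probe fails the same way on the build agent, restarting (or reinstalling) the socket_vmnet service is the first fix to try; the remaining multinode failures below share this root cause.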

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-190000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-190000 node delete m03: exit status 83 (41.167083ms)

-- stdout --
	* The control-plane node multinode-190000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-190000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-190000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-190000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-190000 status --alsologtostderr: exit status 7 (29.417166ms)

-- stdout --
	multinode-190000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0807 10:56:45.595183    8662 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:56:45.595327    8662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:56:45.595331    8662 out.go:304] Setting ErrFile to fd 2...
	I0807 10:56:45.595333    8662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:56:45.595487    8662 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:56:45.595602    8662 out.go:298] Setting JSON to false
	I0807 10:56:45.595612    8662 mustload.go:65] Loading cluster: multinode-190000
	I0807 10:56:45.595666    8662 notify.go:220] Checking for updates...
	I0807 10:56:45.595812    8662 config.go:182] Loaded profile config "multinode-190000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:56:45.595822    8662 status.go:255] checking status of multinode-190000 ...
	I0807 10:56:45.596046    8662 status.go:330] multinode-190000 host status = "Stopped" (err=<nil>)
	I0807 10:56:45.596050    8662 status.go:343] host is not running, skipping remaining checks
	I0807 10:56:45.596052    8662 status.go:257] multinode-190000 status: &{Name:multinode-190000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-190000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000: exit status 7 (29.619125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-190000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
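
The post-mortem helpers in this report repeatedly shell out to out/minikube-darwin-arm64 status --format={{.Host}} and branch on the exit code rather than on the output alone: in these logs, 7 accompanies a stopped-but-existing host (noted as "may be ok") and 83 a command refused because the control plane is down. A short Go sketch of that exit-code pattern, assuming only the binary path and profile name shown above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", "multinode-190000")
        out, err := cmd.CombinedOutput()
        code := 0
        if ee, ok := err.(*exec.ExitError); ok {
            code = ee.ExitCode() // 7 in these logs: the host exists but is Stopped
        }
        fmt.Printf("host=%q exit=%d\n", out, code)
    }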

TestMultiNode/serial/StopMultiNode (2.25s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-190000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-190000 stop: (2.118405875s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-190000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-190000 status: exit status 7 (67.495042ms)

-- stdout --
	multinode-190000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-190000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-190000 status --alsologtostderr: exit status 7 (33.000584ms)

-- stdout --
	multinode-190000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0807 10:56:47.844503    8682 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:56:47.844656    8682 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:56:47.844659    8682 out.go:304] Setting ErrFile to fd 2...
	I0807 10:56:47.844661    8682 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:56:47.844796    8682 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:56:47.844906    8682 out.go:298] Setting JSON to false
	I0807 10:56:47.844916    8682 mustload.go:65] Loading cluster: multinode-190000
	I0807 10:56:47.844972    8682 notify.go:220] Checking for updates...
	I0807 10:56:47.845107    8682 config.go:182] Loaded profile config "multinode-190000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:56:47.845113    8682 status.go:255] checking status of multinode-190000 ...
	I0807 10:56:47.845326    8682 status.go:330] multinode-190000 host status = "Stopped" (err=<nil>)
	I0807 10:56:47.845330    8682 status.go:343] host is not running, skipping remaining checks
	I0807 10:56:47.845333    8682 status.go:257] multinode-190000 status: &{Name:multinode-190000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-190000 status --alsologtostderr": multinode-190000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-190000 status --alsologtostderr": multinode-190000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000: exit status 7 (30.378667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-190000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.25s)
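
The two "incorrect number of stopped" failures above are count checks: after minikube stop, the test expects one Stopped marker per node, but the status output lists only the single control-plane node because the worker was never added earlier in the run. A hedged sketch of that kind of check; the expected count of 2 is an assumption based on the intended two-node topology, not a value taken from the test source.

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Status output as captured in the log: only one node reports in.
        statusOut := "multinode-190000\ntype: Control Plane\nhost: Stopped\n" +
            "kubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
        const wantNodes = 2 // assumed: control plane plus one worker
        if got := strings.Count(statusOut, "host: Stopped"); got != wantNodes {
            fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
        }
    }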

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-190000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-190000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.176168666s)

-- stdout --
	* [multinode-190000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-190000" primary control-plane node in "multinode-190000" cluster
	* Restarting existing qemu2 VM for "multinode-190000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-190000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 10:56:47.903209    8686 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:56:47.903330    8686 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:56:47.903333    8686 out.go:304] Setting ErrFile to fd 2...
	I0807 10:56:47.903335    8686 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:56:47.903483    8686 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:56:47.904553    8686 out.go:298] Setting JSON to false
	I0807 10:56:47.920527    8686 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5176,"bootTime":1723048231,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 10:56:47.920595    8686 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 10:56:47.925004    8686 out.go:177] * [multinode-190000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 10:56:47.931970    8686 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 10:56:47.931999    8686 notify.go:220] Checking for updates...
	I0807 10:56:47.938969    8686 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 10:56:47.941993    8686 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 10:56:47.944916    8686 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 10:56:47.947959    8686 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 10:56:47.950978    8686 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 10:56:47.954263    8686 config.go:182] Loaded profile config "multinode-190000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:56:47.954520    8686 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 10:56:47.958949    8686 out.go:177] * Using the qemu2 driver based on existing profile
	I0807 10:56:47.965923    8686 start.go:297] selected driver: qemu2
	I0807 10:56:47.965928    8686 start.go:901] validating driver "qemu2" against &{Name:multinode-190000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-190000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:56:47.965974    8686 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 10:56:47.968086    8686 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 10:56:47.968112    8686 cni.go:84] Creating CNI manager for ""
	I0807 10:56:47.968120    8686 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0807 10:56:47.968166    8686 start.go:340] cluster config:
	{Name:multinode-190000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-190000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:56:47.971637    8686 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:56:47.978931    8686 out.go:177] * Starting "multinode-190000" primary control-plane node in "multinode-190000" cluster
	I0807 10:56:47.982934    8686 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 10:56:47.982951    8686 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 10:56:47.982967    8686 cache.go:56] Caching tarball of preloaded images
	I0807 10:56:47.983018    8686 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 10:56:47.983023    8686 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 10:56:47.983086    8686 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/multinode-190000/config.json ...
	I0807 10:56:47.983547    8686 start.go:360] acquireMachinesLock for multinode-190000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:56:47.983574    8686 start.go:364] duration metric: took 21.333µs to acquireMachinesLock for "multinode-190000"
	I0807 10:56:47.983583    8686 start.go:96] Skipping create...Using existing machine configuration
	I0807 10:56:47.983590    8686 fix.go:54] fixHost starting: 
	I0807 10:56:47.983706    8686 fix.go:112] recreateIfNeeded on multinode-190000: state=Stopped err=<nil>
	W0807 10:56:47.983713    8686 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 10:56:47.991946    8686 out.go:177] * Restarting existing qemu2 VM for "multinode-190000" ...
	I0807 10:56:47.995939    8686 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:56:47.995970    8686 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:0e:85:8f:3d:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/disk.qcow2
	I0807 10:56:47.997941    8686 main.go:141] libmachine: STDOUT: 
	I0807 10:56:47.997959    8686 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:56:47.997986    8686 fix.go:56] duration metric: took 14.398833ms for fixHost
	I0807 10:56:47.997990    8686 start.go:83] releasing machines lock for "multinode-190000", held for 14.411625ms
	W0807 10:56:47.997996    8686 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 10:56:47.998037    8686 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:56:47.998041    8686 start.go:729] Will try again in 5 seconds ...
	I0807 10:56:53.000316    8686 start.go:360] acquireMachinesLock for multinode-190000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:56:53.000729    8686 start.go:364] duration metric: took 307.208µs to acquireMachinesLock for "multinode-190000"
	I0807 10:56:53.000853    8686 start.go:96] Skipping create...Using existing machine configuration
	I0807 10:56:53.000876    8686 fix.go:54] fixHost starting: 
	I0807 10:56:53.001742    8686 fix.go:112] recreateIfNeeded on multinode-190000: state=Stopped err=<nil>
	W0807 10:56:53.001770    8686 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 10:56:53.006278    8686 out.go:177] * Restarting existing qemu2 VM for "multinode-190000" ...
	I0807 10:56:53.010213    8686 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:56:53.010513    8686 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:0e:85:8f:3d:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/multinode-190000/disk.qcow2
	I0807 10:56:53.019605    8686 main.go:141] libmachine: STDOUT: 
	I0807 10:56:53.019662    8686 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:56:53.019747    8686 fix.go:56] duration metric: took 18.877458ms for fixHost
	I0807 10:56:53.019762    8686 start.go:83] releasing machines lock for "multinode-190000", held for 19.011458ms
	W0807 10:56:53.019993    8686 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-190000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-190000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:56:53.026273    8686 out.go:177] 
	W0807 10:56:53.029212    8686 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 10:56:53.029243    8686 out.go:239] * 
	* 
	W0807 10:56:53.031901    8686 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 10:56:53.040231    8686 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-190000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000: exit status 7 (68.619167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-190000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
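
The restart attempts above follow a fixed shape: one fixHost attempt, a "StartHost failed, but will try again" warning, a five-second back-off, a second attempt, then GUEST_PROVISION and exit status 80. A minimal sketch of that single-retry flow, with startHost standing in for the driver start that keeps failing in this run:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHost stands in for the qemu2 driver start that fails in this run.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // the fixed back-off seen in the log
            if err := startHost(); err != nil {
                // A second failure is fatal and maps to exit status 80 above.
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
                return
            }
        }
        fmt.Println("host started")
    }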

TestMultiNode/serial/ValidateNameConflict (20.37s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-190000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-190000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-190000-m01 --driver=qemu2 : exit status 80 (10.0117025s)

-- stdout --
	* [multinode-190000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-190000-m01" primary control-plane node in "multinode-190000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-190000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-190000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-190000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-190000-m02 --driver=qemu2 : exit status 80 (10.132062333s)

-- stdout --
	* [multinode-190000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-190000-m02" primary control-plane node in "multinode-190000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-190000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-190000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-190000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-190000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-190000: exit status 83 (79.948125ms)

-- stdout --
	* The control-plane node multinode-190000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-190000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-190000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-190000 -n multinode-190000: exit status 7 (29.513291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-190000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.37s)
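
ValidateNameConflict deliberately requests profiles named multinode-190000-m01 and -m02, which collide with the -mNN suffix scheme minikube uses for secondary nodes of the existing multinode-190000 profile; in this run both starts die on socket_vmnet before any conflict handling is exercised. A hedged sketch of what such a name-conflict check can look like; the regexp and the profile list are illustrative, not minikube's code.

    package main

    import (
        "fmt"
        "regexp"
    )

    // nodeSuffix matches names shaped like secondary-node names: "<base>-m<NN>".
    var nodeSuffix = regexp.MustCompile(`^(.+)-m\d+$`)

    // conflicts reports whether a requested profile name collides with the
    // node-naming scheme of an existing profile.
    func conflicts(requested string, existing []string) bool {
        m := nodeSuffix.FindStringSubmatch(requested)
        if m == nil {
            return false
        }
        for _, p := range existing {
            if p == m[1] { // base name matches an existing profile
                return true
            }
        }
        return false
    }

    func main() {
        fmt.Println(conflicts("multinode-190000-m01", []string{"multinode-190000"})) // true
    }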

TestPreload (10.2s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-010000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-010000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.04678025s)

-- stdout --
	* [test-preload-010000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-010000" primary control-plane node in "test-preload-010000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-010000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 10:57:13.626169    8749 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:57:13.626297    8749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:57:13.626301    8749 out.go:304] Setting ErrFile to fd 2...
	I0807 10:57:13.626303    8749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:57:13.626423    8749 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:57:13.627469    8749 out.go:298] Setting JSON to false
	I0807 10:57:13.643379    8749 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5202,"bootTime":1723048231,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 10:57:13.643446    8749 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 10:57:13.650085    8749 out.go:177] * [test-preload-010000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 10:57:13.657989    8749 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 10:57:13.658043    8749 notify.go:220] Checking for updates...
	I0807 10:57:13.666031    8749 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 10:57:13.668963    8749 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 10:57:13.672009    8749 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 10:57:13.675043    8749 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 10:57:13.678041    8749 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 10:57:13.681271    8749 config.go:182] Loaded profile config "multinode-190000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:57:13.681330    8749 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 10:57:13.686051    8749 out.go:177] * Using the qemu2 driver based on user configuration
	I0807 10:57:13.692995    8749 start.go:297] selected driver: qemu2
	I0807 10:57:13.693001    8749 start.go:901] validating driver "qemu2" against <nil>
	I0807 10:57:13.693007    8749 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 10:57:13.695253    8749 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 10:57:13.697988    8749 out.go:177] * Automatically selected the socket_vmnet network
	I0807 10:57:13.701091    8749 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 10:57:13.701149    8749 cni.go:84] Creating CNI manager for ""
	I0807 10:57:13.701157    8749 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 10:57:13.701162    8749 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 10:57:13.701198    8749 start.go:340] cluster config:
	{Name:test-preload-010000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-010000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:57:13.704833    8749 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:57:13.712066    8749 out.go:177] * Starting "test-preload-010000" primary control-plane node in "test-preload-010000" cluster
	I0807 10:57:13.715969    8749 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0807 10:57:13.716049    8749 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/test-preload-010000/config.json ...
	I0807 10:57:13.716065    8749 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/test-preload-010000/config.json: {Name:mkc55a9e138b35b1ec62680de0dcae6c25648da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 10:57:13.716064    8749 cache.go:107] acquiring lock: {Name:mk5e2b6546238d7c0154921386382b701b23a45a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:57:13.716068    8749 cache.go:107] acquiring lock: {Name:mk19318b863b5fba618ef4f30bbda15ca5b0aa91 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:57:13.716082    8749 cache.go:107] acquiring lock: {Name:mk43da5439813f35f23e76a68adcb13cfff1b708 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:57:13.716213    8749 cache.go:107] acquiring lock: {Name:mk40adfea855f04fbd9764f6e5b3cdda902f77be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:57:13.716299    8749 cache.go:107] acquiring lock: {Name:mkaac034a84ce84abc4bace9df6a208c0774567f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:57:13.716334    8749 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0807 10:57:13.716331    8749 cache.go:107] acquiring lock: {Name:mkcaa1db80cedda2d34e53e0f0edffcc9b3f6ead Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:57:13.716314    8749 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0807 10:57:13.716374    8749 cache.go:107] acquiring lock: {Name:mk1443afdac1d7afa876dd806edc9553c433707d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:57:13.716387    8749 cache.go:107] acquiring lock: {Name:mk050dfaadfca60ceb12502879c1986a28fa68b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:57:13.716456    8749 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 10:57:13.716528    8749 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0807 10:57:13.716546    8749 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0807 10:57:13.716653    8749 start.go:360] acquireMachinesLock for test-preload-010000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:57:13.716663    8749 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0807 10:57:13.716692    8749 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0807 10:57:13.716692    8749 start.go:364] duration metric: took 32.125µs to acquireMachinesLock for "test-preload-010000"
	I0807 10:57:13.716735    8749 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0807 10:57:13.716732    8749 start.go:93] Provisioning new machine with config: &{Name:test-preload-010000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-010000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 10:57:13.716781    8749 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 10:57:13.723991    8749 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 10:57:13.729945    8749 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0807 10:57:13.729945    8749 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0807 10:57:13.730037    8749 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0807 10:57:13.730074    8749 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0807 10:57:13.730138    8749 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0807 10:57:13.730257    8749 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0807 10:57:13.731219    8749 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0807 10:57:13.731428    8749 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 10:57:13.742082    8749 start.go:159] libmachine.API.Create for "test-preload-010000" (driver="qemu2")
	I0807 10:57:13.742103    8749 client.go:168] LocalClient.Create starting
	I0807 10:57:13.742167    8749 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 10:57:13.742195    8749 main.go:141] libmachine: Decoding PEM data...
	I0807 10:57:13.742203    8749 main.go:141] libmachine: Parsing certificate...
	I0807 10:57:13.742243    8749 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 10:57:13.742264    8749 main.go:141] libmachine: Decoding PEM data...
	I0807 10:57:13.742271    8749 main.go:141] libmachine: Parsing certificate...
	I0807 10:57:13.742665    8749 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 10:57:13.899344    8749 main.go:141] libmachine: Creating SSH key...
	I0807 10:57:14.093566    8749 cache.go:162] opening:  /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0807 10:57:14.106516    8749 main.go:141] libmachine: Creating Disk image...
	I0807 10:57:14.106522    8749 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 10:57:14.106758    8749 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/test-preload-010000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/test-preload-010000/disk.qcow2
	I0807 10:57:14.116472    8749 main.go:141] libmachine: STDOUT: 
	I0807 10:57:14.116489    8749 main.go:141] libmachine: STDERR: 
	I0807 10:57:14.116529    8749 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/test-preload-010000/disk.qcow2 +20000M
	I0807 10:57:14.118774    8749 cache.go:162] opening:  /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0807 10:57:14.124930    8749 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 10:57:14.124936    8749 main.go:141] libmachine: STDERR: 
	I0807 10:57:14.124946    8749 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/test-preload-010000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/test-preload-010000/disk.qcow2
	I0807 10:57:14.124950    8749 main.go:141] libmachine: Starting QEMU VM...
	I0807 10:57:14.124956    8749 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:57:14.124982    8749 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/test-preload-010000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/test-preload-010000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/test-preload-010000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:a7:e9:30:d4:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/test-preload-010000/disk.qcow2
	I0807 10:57:14.126705    8749 main.go:141] libmachine: STDOUT: 
	I0807 10:57:14.126719    8749 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:57:14.126736    8749 client.go:171] duration metric: took 384.631625ms to LocalClient.Create
	I0807 10:57:14.132299    8749 cache.go:162] opening:  /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0807 10:57:14.150390    8749 cache.go:162] opening:  /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0807 10:57:14.171203    8749 cache.go:162] opening:  /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0807 10:57:14.216448    8749 cache.go:162] opening:  /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	W0807 10:57:14.220344    8749 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0807 10:57:14.220367    8749 cache.go:162] opening:  /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0807 10:57:14.297588    8749 cache.go:157] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0807 10:57:14.297622    8749 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 581.434666ms
	I0807 10:57:14.297640    8749 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0807 10:57:14.614260    8749 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0807 10:57:14.614347    8749 cache.go:162] opening:  /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0807 10:57:14.904529    8749 cache.go:157] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0807 10:57:14.904606    8749 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.188543958s
	I0807 10:57:14.904631    8749 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0807 10:57:16.126972    8749 start.go:128] duration metric: took 2.41018075s to createHost
	I0807 10:57:16.127026    8749 start.go:83] releasing machines lock for "test-preload-010000", held for 2.4103115s
	W0807 10:57:16.127103    8749 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:57:16.143465    8749 out.go:177] * Deleting "test-preload-010000" in qemu2 ...
	W0807 10:57:16.176762    8749 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:57:16.176788    8749 start.go:729] Will try again in 5 seconds ...
	I0807 10:57:16.444066    8749 cache.go:157] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0807 10:57:16.444117    8749 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.72779825s
	I0807 10:57:16.444152    8749 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0807 10:57:16.721500    8749 cache.go:157] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0807 10:57:16.721551    8749 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.005328375s
	I0807 10:57:16.721576    8749 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0807 10:57:17.808118    8749 cache.go:157] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0807 10:57:17.808167    8749 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 4.091894708s
	I0807 10:57:17.808190    8749 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0807 10:57:18.128607    8749 cache.go:157] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0807 10:57:18.128660    8749 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.412630125s
	I0807 10:57:18.128691    8749 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0807 10:57:18.588004    8749 cache.go:157] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0807 10:57:18.588056    8749 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.872027709s
	I0807 10:57:18.588080    8749 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0807 10:57:21.177013    8749 start.go:360] acquireMachinesLock for test-preload-010000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:57:21.177433    8749 start.go:364] duration metric: took 352.584µs to acquireMachinesLock for "test-preload-010000"
	I0807 10:57:21.177538    8749 start.go:93] Provisioning new machine with config: &{Name:test-preload-010000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-010000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 10:57:21.177786    8749 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 10:57:21.190298    8749 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 10:57:21.241484    8749 start.go:159] libmachine.API.Create for "test-preload-010000" (driver="qemu2")
	I0807 10:57:21.241530    8749 client.go:168] LocalClient.Create starting
	I0807 10:57:21.241687    8749 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 10:57:21.241758    8749 main.go:141] libmachine: Decoding PEM data...
	I0807 10:57:21.241780    8749 main.go:141] libmachine: Parsing certificate...
	I0807 10:57:21.241855    8749 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 10:57:21.241898    8749 main.go:141] libmachine: Decoding PEM data...
	I0807 10:57:21.241914    8749 main.go:141] libmachine: Parsing certificate...
	I0807 10:57:21.242435    8749 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 10:57:21.409625    8749 main.go:141] libmachine: Creating SSH key...
	I0807 10:57:21.578834    8749 main.go:141] libmachine: Creating Disk image...
	I0807 10:57:21.578852    8749 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 10:57:21.579083    8749 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/test-preload-010000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/test-preload-010000/disk.qcow2
	I0807 10:57:21.588639    8749 main.go:141] libmachine: STDOUT: 
	I0807 10:57:21.588658    8749 main.go:141] libmachine: STDERR: 
	I0807 10:57:21.588703    8749 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/test-preload-010000/disk.qcow2 +20000M
	I0807 10:57:21.596786    8749 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 10:57:21.596812    8749 main.go:141] libmachine: STDERR: 
	I0807 10:57:21.596826    8749 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/test-preload-010000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/test-preload-010000/disk.qcow2
	I0807 10:57:21.596829    8749 main.go:141] libmachine: Starting QEMU VM...
	I0807 10:57:21.596841    8749 qemu.go:418] Using hvf for hardware acceleration
	I0807 10:57:21.596882    8749 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/test-preload-010000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/test-preload-010000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/test-preload-010000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:dc:4d:9c:c2:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/test-preload-010000/disk.qcow2
	I0807 10:57:21.598644    8749 main.go:141] libmachine: STDOUT: 
	I0807 10:57:21.598658    8749 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 10:57:21.598672    8749 client.go:171] duration metric: took 357.138458ms to LocalClient.Create
	I0807 10:57:22.764134    8749 cache.go:157] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0807 10:57:22.764204    8749 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.047960834s
	I0807 10:57:22.764237    8749 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0807 10:57:22.764269    8749 cache.go:87] Successfully saved all images to host disk.
	I0807 10:57:23.600900    8749 start.go:128] duration metric: took 2.423087291s to createHost
	I0807 10:57:23.600952    8749 start.go:83] releasing machines lock for "test-preload-010000", held for 2.423513375s
	W0807 10:57:23.601259    8749 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-010000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-010000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 10:57:23.610965    8749 out.go:177] 
	W0807 10:57:23.618004    8749 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 10:57:23.618031    8749 out.go:239] * 
	* 
	W0807 10:57:23.620518    8749 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 10:57:23.630914    8749 out.go:177] 

** /stderr **
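While the VM create/retry loop above kept failing, the image cache worker completed normally. As the cache.go lines show, each image tag is written as a tarball under $MINIKUBE_HOME/cache/images/<arch>/, with the ':' tag separator rewritten to '_' (registry.k8s.io/pause:3.7 becomes .../registry.k8s.io/pause_3.7). A quick sketch for inspecting the cache, using this run's MINIKUBE_HOME (adjust the path for other setups):

	ls /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/
	# per the log above this should list: pause_3.7, etcd_3.5.3-0, kube-apiserver_v1.24.4,
	# kube-controller-manager_v1.24.4, kube-scheduler_v1.24.4, kube-proxy_v1.24.4,
	# plus a coredns/ directory holding coredns_v1.8.6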
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-010000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-07 10:57:23.648013 -0700 PDT m=+696.240834918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-010000 -n test-preload-010000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-010000 -n test-preload-010000: exit status 7 (66.609334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-010000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-010000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-010000
--- FAIL: TestPreload (10.20s)
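Every qemu2 start in this run fails at the same step: socket_vmnet_client, which wraps the qemu-system-aarch64 command above and hands the VM its network socket as fd 3 (-netdev socket,id=net0,fd=3), cannot reach the daemon's Unix socket at /var/run/socket_vmnet, so both creation attempts fail and minikube exits with status 80. A minimal host-side check, assuming the install paths shown in the log (the gateway address below is illustrative, not taken from this run):

	ls -l /var/run/socket_vmnet    # does the daemon's socket exist?
	pgrep -fl socket_vmnet         # is the daemon process running?
	# if not, start it by hand (vmnet requires root):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

Once the daemon is listening again, the same socket_vmnet_client invocation should get past the connection step.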

TestScheduledStopUnix (10.16s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-774000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-774000 --memory=2048 --driver=qemu2 : exit status 80 (10.012921792s)

-- stdout --
	* [scheduled-stop-774000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-774000" primary control-plane node in "scheduled-stop-774000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-774000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-774000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-774000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-774000" primary control-plane node in "scheduled-stop-774000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-774000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-774000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-07 10:57:33.807371 -0700 PDT m=+706.400266251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-774000 -n scheduled-stop-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-774000 -n scheduled-stop-774000: exit status 7 (68.162792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-774000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-774000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-774000
--- FAIL: TestScheduledStopUnix (10.16s)
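The post-mortem is identical for each failed profile: minikube status exits with code 7 when the profile exists but its host is not running, which helpers_test.go treats as "may be ok" and uses to skip log retrieval. The check can be re-run by hand with the command taken verbatim from the log:

	out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-774000 -n scheduled-stop-774000
	echo $?    # prints 7 after the "Stopped" line, matching the output above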

TestSkaffold (13.12s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe487954104 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe487954104 version: (1.059544167s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-572000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-572000 --memory=2600 --driver=qemu2 : exit status 80 (9.989038042s)

-- stdout --
	* [skaffold-572000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-572000" primary control-plane node in "skaffold-572000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-572000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-572000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-572000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-572000" primary control-plane node in "skaffold-572000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-572000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-572000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-07 10:57:46.935561 -0700 PDT m=+719.528550584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-572000 -n skaffold-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-572000 -n skaffold-572000: exit status 7 (61.635ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-572000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-572000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-572000
--- FAIL: TestSkaffold (13.12s)
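All three starts in this stretch auto-selected the socket_vmnet network and then hit the same refused connection. If the daemon cannot be repaired on the agent, a possible workaround (not exercised in this run, and assuming a minikube build that supports user-mode networking for the qemu2 driver) is to bypass socket_vmnet entirely, at the cost of features that need a routable VM address, such as minikube tunnel:

	out/minikube-darwin-arm64 start -p skaffold-572000 --memory=2600 --driver=qemu2 --network=builtin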

TestRunningBinaryUpgrade (593.71s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.768293523 start -p running-upgrade-210000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.768293523 start -p running-upgrade-210000 --memory=2200 --vm-driver=qemu2 : (55.912141709s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-210000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-210000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m23.859945334s)

-- stdout --
	* [running-upgrade-210000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-210000" primary control-plane node in "running-upgrade-210000" cluster
	* Updating the running qemu2 "running-upgrade-210000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0807 10:59:26.050692    9168 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:59:26.050815    9168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:59:26.050820    9168 out.go:304] Setting ErrFile to fd 2...
	I0807 10:59:26.050822    9168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:59:26.050948    9168 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:59:26.052006    9168 out.go:298] Setting JSON to false
	I0807 10:59:26.068641    9168 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5335,"bootTime":1723048231,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 10:59:26.068724    9168 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 10:59:26.073032    9168 out.go:177] * [running-upgrade-210000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 10:59:26.080034    9168 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 10:59:26.080087    9168 notify.go:220] Checking for updates...
	I0807 10:59:26.086020    9168 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 10:59:26.088935    9168 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 10:59:26.092020    9168 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 10:59:26.095056    9168 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 10:59:26.098036    9168 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 10:59:26.101312    9168 config.go:182] Loaded profile config "running-upgrade-210000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0807 10:59:26.104964    9168 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0807 10:59:26.107993    9168 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 10:59:26.111981    9168 out.go:177] * Using the qemu2 driver based on existing profile
	I0807 10:59:26.117961    9168 start.go:297] selected driver: qemu2
	I0807 10:59:26.117966    9168 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-210000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51250 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-210000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0807 10:59:26.118006    9168 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 10:59:26.120276    9168 cni.go:84] Creating CNI manager for ""
	I0807 10:59:26.120292    9168 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 10:59:26.120334    9168 start.go:340] cluster config:
	{Name:running-upgrade-210000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51250 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-210000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0807 10:59:26.120381    9168 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:59:26.129040    9168 out.go:177] * Starting "running-upgrade-210000" primary control-plane node in "running-upgrade-210000" cluster
	I0807 10:59:26.133014    9168 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0807 10:59:26.133030    9168 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0807 10:59:26.133038    9168 cache.go:56] Caching tarball of preloaded images
	I0807 10:59:26.133093    9168 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 10:59:26.133100    9168 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0807 10:59:26.133163    9168 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000/config.json ...
	I0807 10:59:26.133649    9168 start.go:360] acquireMachinesLock for running-upgrade-210000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 10:59:26.133675    9168 start.go:364] duration metric: took 20.791µs to acquireMachinesLock for "running-upgrade-210000"
	I0807 10:59:26.133683    9168 start.go:96] Skipping create...Using existing machine configuration
	I0807 10:59:26.133689    9168 fix.go:54] fixHost starting: 
	I0807 10:59:26.134248    9168 fix.go:112] recreateIfNeeded on running-upgrade-210000: state=Running err=<nil>
	W0807 10:59:26.134256    9168 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 10:59:26.142005    9168 out.go:177] * Updating the running qemu2 "running-upgrade-210000" VM ...
	I0807 10:59:26.145996    9168 machine.go:94] provisionDockerMachine start ...
	I0807 10:59:26.146022    9168 main.go:141] libmachine: Using SSH client type: native
	I0807 10:59:26.146125    9168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10459aa10] 0x10459d270 <nil>  [] 0s} localhost 51218 <nil> <nil>}
	I0807 10:59:26.146129    9168 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 10:59:26.206671    9168 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-210000
	
	I0807 10:59:26.206688    9168 buildroot.go:166] provisioning hostname "running-upgrade-210000"
	I0807 10:59:26.206759    9168 main.go:141] libmachine: Using SSH client type: native
	I0807 10:59:26.206921    9168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10459aa10] 0x10459d270 <nil>  [] 0s} localhost 51218 <nil> <nil>}
	I0807 10:59:26.206927    9168 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-210000 && echo "running-upgrade-210000" | sudo tee /etc/hostname
	I0807 10:59:26.272965    9168 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-210000
	
	I0807 10:59:26.273022    9168 main.go:141] libmachine: Using SSH client type: native
	I0807 10:59:26.273143    9168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10459aa10] 0x10459d270 <nil>  [] 0s} localhost 51218 <nil> <nil>}
	I0807 10:59:26.273151    9168 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-210000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-210000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-210000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 10:59:26.333584    9168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 10:59:26.333601    9168 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19389-6671/.minikube CaCertPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19389-6671/.minikube}
	I0807 10:59:26.333611    9168 buildroot.go:174] setting up certificates
	I0807 10:59:26.333615    9168 provision.go:84] configureAuth start
	I0807 10:59:26.333626    9168 provision.go:143] copyHostCerts
	I0807 10:59:26.333693    9168 exec_runner.go:144] found /Users/jenkins/minikube-integration/19389-6671/.minikube/ca.pem, removing ...
	I0807 10:59:26.333702    9168 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19389-6671/.minikube/ca.pem
	I0807 10:59:26.333819    9168 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19389-6671/.minikube/ca.pem (1082 bytes)
	I0807 10:59:26.333967    9168 exec_runner.go:144] found /Users/jenkins/minikube-integration/19389-6671/.minikube/cert.pem, removing ...
	I0807 10:59:26.333970    9168 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19389-6671/.minikube/cert.pem
	I0807 10:59:26.334013    9168 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19389-6671/.minikube/cert.pem (1123 bytes)
	I0807 10:59:26.334106    9168 exec_runner.go:144] found /Users/jenkins/minikube-integration/19389-6671/.minikube/key.pem, removing ...
	I0807 10:59:26.334109    9168 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19389-6671/.minikube/key.pem
	I0807 10:59:26.334156    9168 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19389-6671/.minikube/key.pem (1675 bytes)
	I0807 10:59:26.334236    9168 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-210000 san=[127.0.0.1 localhost minikube running-upgrade-210000]
	I0807 10:59:26.688421    9168 provision.go:177] copyRemoteCerts
	I0807 10:59:26.688470    9168 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 10:59:26.688479    9168 sshutil.go:53] new ssh client: &{IP:localhost Port:51218 SSHKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/running-upgrade-210000/id_rsa Username:docker}
	I0807 10:59:26.721378    9168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 10:59:26.728034    9168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0807 10:59:26.734991    9168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0807 10:59:26.741789    9168 provision.go:87] duration metric: took 408.167042ms to configureAuth
	I0807 10:59:26.741797    9168 buildroot.go:189] setting minikube options for container-runtime
	I0807 10:59:26.741904    9168 config.go:182] Loaded profile config "running-upgrade-210000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0807 10:59:26.741935    9168 main.go:141] libmachine: Using SSH client type: native
	I0807 10:59:26.742030    9168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10459aa10] 0x10459d270 <nil>  [] 0s} localhost 51218 <nil> <nil>}
	I0807 10:59:26.742035    9168 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0807 10:59:26.805334    9168 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0807 10:59:26.805343    9168 buildroot.go:70] root file system type: tmpfs
	I0807 10:59:26.805398    9168 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0807 10:59:26.805446    9168 main.go:141] libmachine: Using SSH client type: native
	I0807 10:59:26.805560    9168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10459aa10] 0x10459d270 <nil>  [] 0s} localhost 51218 <nil> <nil>}
	I0807 10:59:26.805599    9168 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0807 10:59:26.870323    9168 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0807 10:59:26.870379    9168 main.go:141] libmachine: Using SSH client type: native
	I0807 10:59:26.870492    9168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10459aa10] 0x10459d270 <nil>  [] 0s} localhost 51218 <nil> <nil>}
	I0807 10:59:26.870503    9168 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0807 10:59:26.931986    9168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 10:59:26.931997    9168 machine.go:97] duration metric: took 786.001166ms to provisionDockerMachine
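	The `diff -u ... || { mv ...; daemon-reload; }` command above is an idempotent write-compare-swap: the candidate unit is written to a `.new` file and only promoted (with a reload and restart) when it differs from the live one. A minimal local sketch of that same pattern, with illustrative file names, might look like this:

```go
// Sketch of the write-compare-swap pattern used above for
// /lib/systemd/system/docker.service: write the candidate to a ".new"
// file, replace the live unit only if the contents differ, and report
// whether a daemon-reload would be needed. Paths are illustrative.
package main

import (
	"bytes"
	"fmt"
	"os"
)

// updateUnit returns true when the live file was replaced.
func updateUnit(live, candidate string) (bool, error) {
	newPath := live + ".new"
	if err := os.WriteFile(newPath, []byte(candidate), 0o644); err != nil {
		return false, err
	}
	cur, err := os.ReadFile(live)
	if err == nil && bytes.Equal(cur, []byte(candidate)) {
		return false, os.Remove(newPath) // unchanged: keep the live unit
	}
	return true, os.Rename(newPath, live) // changed (or missing): swap in
}

func main() {
	changed, err := updateUnit("docker.service", "[Unit]\nDescription=demo\n")
	if err != nil {
		panic(err)
	}
	if changed {
		fmt.Println("unit replaced; a daemon-reload/restart would follow")
	}
}
```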
	I0807 10:59:26.932002    9168 start.go:293] postStartSetup for "running-upgrade-210000" (driver="qemu2")
	I0807 10:59:26.932009    9168 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 10:59:26.932060    9168 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 10:59:26.932073    9168 sshutil.go:53] new ssh client: &{IP:localhost Port:51218 SSHKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/running-upgrade-210000/id_rsa Username:docker}
	I0807 10:59:26.966555    9168 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 10:59:26.967838    9168 info.go:137] Remote host: Buildroot 2021.02.12
	I0807 10:59:26.967847    9168 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19389-6671/.minikube/addons for local assets ...
	I0807 10:59:26.967919    9168 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19389-6671/.minikube/files for local assets ...
	I0807 10:59:26.968006    9168 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19389-6671/.minikube/files/etc/ssl/certs/71662.pem -> 71662.pem in /etc/ssl/certs
	I0807 10:59:26.968102    9168 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 10:59:26.972527    9168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/files/etc/ssl/certs/71662.pem --> /etc/ssl/certs/71662.pem (1708 bytes)
	I0807 10:59:26.979951    9168 start.go:296] duration metric: took 47.94325ms for postStartSetup
	I0807 10:59:26.979965    9168 fix.go:56] duration metric: took 846.284167ms for fixHost
	I0807 10:59:26.980009    9168 main.go:141] libmachine: Using SSH client type: native
	I0807 10:59:26.980127    9168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10459aa10] 0x10459d270 <nil>  [] 0s} localhost 51218 <nil> <nil>}
	I0807 10:59:26.980133    9168 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0807 10:59:27.041704    9168 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723053566.830224389
	
	I0807 10:59:27.041712    9168 fix.go:216] guest clock: 1723053566.830224389
	I0807 10:59:27.041719    9168 fix.go:229] Guest: 2024-08-07 10:59:26.830224389 -0700 PDT Remote: 2024-08-07 10:59:26.979967 -0700 PDT m=+0.949230918 (delta=-149.742611ms)
	I0807 10:59:27.041735    9168 fix.go:200] guest clock delta is within tolerance: -149.742611ms
	I0807 10:59:27.041737    9168 start.go:83] releasing machines lock for "running-upgrade-210000", held for 908.065334ms
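	The guest-clock check above runs `date +%s.%N` over SSH, parses the seconds.nanoseconds output, and compares the delta against host time to a tolerance (here the -149.7ms delta passes). A sketch of that parse-and-compare step, with an assumed 1s tolerance, follows:

```go
// Sketch of the guest-clock check logged above: parse the guest's
// `date +%s.%N` output and test whether the delta from host time is
// inside a tolerance window. The 1s tolerance is an assumption.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1723053566.830224389")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	const tolerance = time.Second // assumed threshold
	fmt.Printf("delta=%v within tolerance=%v\n", delta, delta.Abs() <= tolerance)
}
```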
	I0807 10:59:27.041790    9168 ssh_runner.go:195] Run: cat /version.json
	I0807 10:59:27.041799    9168 sshutil.go:53] new ssh client: &{IP:localhost Port:51218 SSHKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/running-upgrade-210000/id_rsa Username:docker}
	I0807 10:59:27.041790    9168 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0807 10:59:27.041841    9168 sshutil.go:53] new ssh client: &{IP:localhost Port:51218 SSHKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/running-upgrade-210000/id_rsa Username:docker}
	W0807 10:59:27.042342    9168 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:51326->127.0.0.1:51218: read: connection reset by peer
	I0807 10:59:27.042360    9168 retry.go:31] will retry after 260.444595ms: ssh: handshake failed: read tcp 127.0.0.1:51326->127.0.0.1:51218: read: connection reset by peer
	W0807 10:59:27.072502    9168 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0807 10:59:27.072560    9168 ssh_runner.go:195] Run: systemctl --version
	I0807 10:59:27.074457    9168 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 10:59:27.076204    9168 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 10:59:27.076234    9168 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0807 10:59:27.079030    9168 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0807 10:59:27.083534    9168 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0807 10:59:27.083545    9168 start.go:495] detecting cgroup driver to use...
	I0807 10:59:27.083655    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 10:59:27.088817    9168 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0807 10:59:27.092146    9168 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0807 10:59:27.095191    9168 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0807 10:59:27.095215    9168 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0807 10:59:27.098237    9168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 10:59:27.101030    9168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0807 10:59:27.104317    9168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 10:59:27.107725    9168 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 10:59:27.119549    9168 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0807 10:59:27.122786    9168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0807 10:59:27.125504    9168 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0807 10:59:27.128875    9168 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 10:59:27.132148    9168 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 10:59:27.134926    9168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 10:59:27.228289    9168 ssh_runner.go:195] Run: sudo systemctl restart containerd
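	The run of `sed -i -r` commands above rewrites /etc/containerd/config.toml in place (cgroup driver, runtime type, CNI conf dir) before the daemon restart. A sketch of one of those edits done as an indentation-preserving regex rewrite, with the input string standing in for the real file, looks like this:

```go
// Sketch mirroring one sed edit above: force `SystemdCgroup = false`
// in a containerd config.toml while preserving the line's indentation.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	cfg := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
		"  SystemdCgroup = true\n"
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(cfg, "${1}SystemdCgroup = false"))
}
```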
	I0807 10:59:27.238704    9168 start.go:495] detecting cgroup driver to use...
	I0807 10:59:27.238769    9168 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0807 10:59:27.244886    9168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 10:59:27.249733    9168 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 10:59:27.257311    9168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 10:59:27.264203    9168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 10:59:27.268820    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 10:59:27.274243    9168 ssh_runner.go:195] Run: which cri-dockerd
	I0807 10:59:27.275438    9168 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0807 10:59:27.277973    9168 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0807 10:59:27.282801    9168 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0807 10:59:27.356819    9168 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0807 10:59:27.443680    9168 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0807 10:59:27.443744    9168 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0807 10:59:27.449120    9168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 10:59:27.524631    9168 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 10:59:30.043902    9168 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.519273291s)
	I0807 10:59:30.043971    9168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0807 10:59:30.048479    9168 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0807 10:59:30.054415    9168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 10:59:30.059338    9168 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0807 10:59:30.132502    9168 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0807 10:59:30.215690    9168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 10:59:30.282725    9168 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0807 10:59:30.288663    9168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 10:59:30.293357    9168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 10:59:30.379552    9168 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0807 10:59:30.420882    9168 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0807 10:59:30.420960    9168 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0807 10:59:30.423938    9168 start.go:563] Will wait 60s for crictl version
	I0807 10:59:30.423992    9168 ssh_runner.go:195] Run: which crictl
	I0807 10:59:30.425325    9168 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 10:59:30.437096    9168 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0807 10:59:30.437171    9168 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 10:59:30.449394    9168 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 10:59:30.469591    9168 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0807 10:59:30.469723    9168 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0807 10:59:30.471157    9168 kubeadm.go:883] updating cluster {Name:running-upgrade-210000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51250 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-210000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0807 10:59:30.471200    9168 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0807 10:59:30.471243    9168 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0807 10:59:30.481946    9168 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0807 10:59:30.481957    9168 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0807 10:59:30.482000    9168 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0807 10:59:30.485112    9168 ssh_runner.go:195] Run: which lz4
	I0807 10:59:30.486537    9168 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0807 10:59:30.487765    9168 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0807 10:59:30.487776    9168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0807 10:59:31.368482    9168 docker.go:649] duration metric: took 881.98275ms to copy over tarball
	I0807 10:59:31.368538    9168 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0807 10:59:32.556735    9168 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.188190917s)
	I0807 10:59:32.556751    9168 ssh_runner.go:146] rm: /preloaded.tar.lz4
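	The preload step above copies the cached image tarball to the guest, extracts it into /var with xattrs preserved, and removes it. A sketch of the extract-and-clean-up step, assuming a `tar` with lz4 support on PATH and an illustrative path, could be:

```go
// Sketch of the preload step logged above: extract an lz4-compressed
// image tarball into /var with security.capability xattrs preserved,
// then delete the tarball. The path is illustrative.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // illustrative path
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("extract: %v", err)
	}
	if err := os.Remove(tarball); err != nil {
		log.Fatalf("cleanup: %v", err)
	}
}
```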
	I0807 10:59:32.573003    9168 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0807 10:59:32.576581    9168 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0807 10:59:32.581821    9168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 10:59:32.671705    9168 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 10:59:33.891526    9168 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.219812709s)
	I0807 10:59:33.891636    9168 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0807 10:59:33.905300    9168 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0807 10:59:33.905308    9168 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0807 10:59:33.905313    9168 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0807 10:59:33.909026    9168 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 10:59:33.910769    9168 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0807 10:59:33.912930    9168 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 10:59:33.912997    9168 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0807 10:59:33.915205    9168 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0807 10:59:33.915295    9168 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0807 10:59:33.916724    9168 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0807 10:59:33.916764    9168 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0807 10:59:33.918230    9168 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0807 10:59:33.918258    9168 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0807 10:59:33.919848    9168 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0807 10:59:33.919848    9168 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0807 10:59:33.920659    9168 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0807 10:59:33.920950    9168 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0807 10:59:33.922075    9168 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0807 10:59:33.922849    9168 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0807 10:59:34.304877    9168 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0807 10:59:34.322906    9168 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0807 10:59:34.322928    9168 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0807 10:59:34.322931    9168 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0807 10:59:34.322953    9168 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0807 10:59:34.338987    9168 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0807 10:59:34.340436    9168 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0807 10:59:34.340451    9168 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0807 10:59:34.340495    9168 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	W0807 10:59:34.347699    9168 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0807 10:59:34.347812    9168 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0807 10:59:34.348916    9168 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0807 10:59:34.351035    9168 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0807 10:59:34.352476    9168 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0807 10:59:34.362466    9168 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0807 10:59:34.362487    9168 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0807 10:59:34.362535    9168 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0807 10:59:34.363045    9168 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0807 10:59:34.368579    9168 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0807 10:59:34.368601    9168 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0807 10:59:34.368652    9168 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0807 10:59:34.375341    9168 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0807 10:59:34.375366    9168 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0807 10:59:34.375424    9168 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0807 10:59:34.379483    9168 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0807 10:59:34.379613    9168 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0807 10:59:34.383342    9168 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0807 10:59:34.386059    9168 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0807 10:59:34.386089    9168 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0807 10:59:34.386120    9168 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0807 10:59:34.395953    9168 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0807 10:59:34.401542    9168 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0807 10:59:34.401571    9168 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0807 10:59:34.401590    9168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0807 10:59:34.404626    9168 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0807 10:59:34.404647    9168 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0807 10:59:34.404695    9168 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0807 10:59:34.414778    9168 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0807 10:59:34.414909    9168 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0807 10:59:34.446898    9168 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0807 10:59:34.446915    9168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0807 10:59:34.450392    9168 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0807 10:59:34.450512    9168 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0807 10:59:34.452774    9168 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0807 10:59:34.452784    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0807 10:59:34.468112    9168 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0807 10:59:34.468142    9168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	W0807 10:59:34.511793    9168 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0807 10:59:34.511903    9168 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 10:59:34.545913    9168 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0807 10:59:34.545940    9168 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0807 10:59:34.545946    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0807 10:59:34.562886    9168 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0807 10:59:34.562907    9168 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 10:59:34.562964    9168 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 10:59:34.634688    9168 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0807 10:59:34.754787    9168 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0807 10:59:34.754813    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0807 10:59:35.584154    9168 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.021136917s)
	I0807 10:59:35.584201    9168 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0807 10:59:35.584163    9168 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0807 10:59:35.584516    9168 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0807 10:59:35.590909    9168 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0807 10:59:35.590942    9168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0807 10:59:35.648355    9168 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0807 10:59:35.648368    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0807 10:59:35.889477    9168 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0807 10:59:35.889526    9168 cache_images.go:92] duration metric: took 1.984219667s to LoadCachedImages
	W0807 10:59:35.889567    9168 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
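	The loop traced above checks each required image in the runtime, removes stale copies, transfers the cached archive, and pipes it into `docker load`; it fails here because the kube-apiserver archive is missing from the host cache. A sketch of the load step alone, with an illustrative archive path, might be:

```go
// Sketch of the cache-load fallback traced above: stream a cached image
// archive into `docker load`. Requires a running docker daemon; the
// archive path is illustrative.
package main

import (
	"log"
	"os"
	"os/exec"
)

func loadImage(archive string) error {
	f, err := os.Open(archive)
	if err != nil {
		return err // e.g. the kube-apiserver_v1.24.1 miss reported above
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
		log.Fatal(err)
	}
}
```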
	I0807 10:59:35.889575    9168 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0807 10:59:35.889633    9168 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-210000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-210000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 10:59:35.889695    9168 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0807 10:59:35.910093    9168 cni.go:84] Creating CNI manager for ""
	I0807 10:59:35.910108    9168 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 10:59:35.910113    9168 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 10:59:35.910121    9168 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-210000 NodeName:running-upgrade-210000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 10:59:35.910189    9168 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-210000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0807 10:59:35.910252    9168 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0807 10:59:35.913160    9168 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 10:59:35.913192    9168 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0807 10:59:35.916270    9168 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0807 10:59:35.921477    9168 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 10:59:35.926378    9168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0807 10:59:35.931946    9168 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0807 10:59:35.933505    9168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 10:59:35.994970    9168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 10:59:35.999903    9168 certs.go:68] Setting up /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000 for IP: 10.0.2.15
	I0807 10:59:35.999910    9168 certs.go:194] generating shared ca certs ...
	I0807 10:59:35.999919    9168 certs.go:226] acquiring lock for ca certs: {Name:mkf594adfb50ee91964d2e538bbb4ff47398b8a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 10:59:36.000993    9168 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/ca.key
	I0807 10:59:36.001027    9168 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/proxy-client-ca.key
	I0807 10:59:36.001033    9168 certs.go:256] generating profile certs ...
	I0807 10:59:36.001090    9168 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000/client.key
	I0807 10:59:36.001102    9168 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000/apiserver.key.02140bcd
	I0807 10:59:36.001113    9168 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000/apiserver.crt.02140bcd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0807 10:59:36.057827    9168 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000/apiserver.crt.02140bcd ...
	I0807 10:59:36.057832    9168 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000/apiserver.crt.02140bcd: {Name:mk9f4b07c8de8cae2aaaeb63b1d4844f246b5fd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 10:59:36.059563    9168 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000/apiserver.key.02140bcd ...
	I0807 10:59:36.059568    9168 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000/apiserver.key.02140bcd: {Name:mkec62a443a0744013e2cd790420199541745819 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 10:59:36.059730    9168 certs.go:381] copying /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000/apiserver.crt.02140bcd -> /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000/apiserver.crt
	I0807 10:59:36.059861    9168 certs.go:385] copying /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000/apiserver.key.02140bcd -> /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000/apiserver.key
	I0807 10:59:36.059989    9168 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000/proxy-client.key
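	The profile-cert generation above mints an apiserver certificate whose IP SANs match the list logged earlier ([10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]), signed by the local minikubeCA. A minimal sketch of that step using crypto/x509, with illustrative names and lifetimes, follows:

```go
// Minimal sketch of the profile-cert step logged above: create a local
// CA (stand-in for minikubeCA) and sign an apiserver cert carrying the
// four IP SANs from the log. Subjects and lifetimes are illustrative.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
	}
	// Sign the server cert with the CA key, using the CA template as parent.
	if _, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey); err != nil {
		log.Fatal(err)
	}
	log.Println("apiserver cert signed with 4 IP SANs")
}
```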
	I0807 10:59:36.060114    9168 certs.go:484] found cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/7166.pem (1338 bytes)
	W0807 10:59:36.060143    9168 certs.go:480] ignoring /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/7166_empty.pem, impossibly tiny 0 bytes
	I0807 10:59:36.060148    9168 certs.go:484] found cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca-key.pem (1675 bytes)
	I0807 10:59:36.060170    9168 certs.go:484] found cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem (1082 bytes)
	I0807 10:59:36.060190    9168 certs.go:484] found cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem (1123 bytes)
	I0807 10:59:36.060208    9168 certs.go:484] found cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/key.pem (1675 bytes)
	I0807 10:59:36.060246    9168 certs.go:484] found cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/files/etc/ssl/certs/71662.pem (1708 bytes)
	I0807 10:59:36.060629    9168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 10:59:36.067959    9168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 10:59:36.074698    9168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 10:59:36.082064    9168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0807 10:59:36.089163    9168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0807 10:59:36.096117    9168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0807 10:59:36.103239    9168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 10:59:36.110181    9168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 10:59:36.117189    9168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 10:59:36.124310    9168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/7166.pem --> /usr/share/ca-certificates/7166.pem (1338 bytes)
	I0807 10:59:36.131587    9168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/files/etc/ssl/certs/71662.pem --> /usr/share/ca-certificates/71662.pem (1708 bytes)
	I0807 10:59:36.138369    9168 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 10:59:36.143190    9168 ssh_runner.go:195] Run: openssl version
	I0807 10:59:36.145017    9168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71662.pem && ln -fs /usr/share/ca-certificates/71662.pem /etc/ssl/certs/71662.pem"
	I0807 10:59:36.148786    9168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71662.pem
	I0807 10:59:36.150396    9168 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 17:47 /usr/share/ca-certificates/71662.pem
	I0807 10:59:36.150418    9168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71662.pem
	I0807 10:59:36.152201    9168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71662.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 10:59:36.154924    9168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 10:59:36.157862    9168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 10:59:36.159380    9168 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:59 /usr/share/ca-certificates/minikubeCA.pem
	I0807 10:59:36.159397    9168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 10:59:36.161183    9168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 10:59:36.164245    9168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7166.pem && ln -fs /usr/share/ca-certificates/7166.pem /etc/ssl/certs/7166.pem"
	I0807 10:59:36.169368    9168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7166.pem
	I0807 10:59:36.171454    9168 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 17:47 /usr/share/ca-certificates/7166.pem
	I0807 10:59:36.171491    9168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7166.pem
	I0807 10:59:36.174126    9168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7166.pem /etc/ssl/certs/51391683.0"
	I0807 10:59:36.183246    9168 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 10:59:36.186028    9168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0807 10:59:36.189457    9168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0807 10:59:36.192752    9168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0807 10:59:36.194717    9168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0807 10:59:36.217772    9168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0807 10:59:36.220343    9168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
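	The `openssl x509 -checkend 86400` probes above ask whether each control-plane certificate expires within the next 24 hours. An equivalent check in Go, with an illustrative file path, could be sketched as:

```go
// Sketch of the `-checkend 86400` validity probe run above: report
// whether a PEM certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
```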
	I0807 10:59:36.222962    9168 kubeadm.go:392] StartCluster: {Name:running-upgrade-210000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51250 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-210000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0807 10:59:36.223040    9168 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0807 10:59:36.248416    9168 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0807 10:59:36.251854    9168 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0807 10:59:36.251863    9168 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0807 10:59:36.251897    9168 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0807 10:59:36.254837    9168 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0807 10:59:36.254876    9168 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-210000" does not appear in /Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 10:59:36.254890    9168 kubeconfig.go:62] /Users/jenkins/minikube-integration/19389-6671/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-210000" cluster setting kubeconfig missing "running-upgrade-210000" context setting]
	I0807 10:59:36.255060    9168 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/kubeconfig: {Name:mkee6d4905f7dc60ed0b5cf9ef87de0f637b0682 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 10:59:36.255718    9168 kapi.go:59] client config for running-upgrade-210000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000/client.key", CAFile:"/Users/jenkins/minikube-integration/19389-6671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10592ff90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0807 10:59:36.256585    9168 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0807 10:59:36.259928    9168 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-210000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0807 10:59:36.259935    9168 kubeadm.go:1160] stopping kube-system containers ...
	I0807 10:59:36.259987    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0807 10:59:36.276532    9168 docker.go:483] Stopping containers: [3f9b4a87fff7 5eff0d597a19 1365b534160e 8d068e32683a 9eb6d855d212 669398a7efd2 4e166f0c456a 61ffc7a70f75 a88a4a0a0efd 41d1ac132496 1b616c052d9d d18fd18a4cc5 b37af3a06a80 17a5b0471928 8ceae728ec44 bbef5694b3ed b8f3973d3fb7]
	I0807 10:59:36.276605    9168 ssh_runner.go:195] Run: docker stop 3f9b4a87fff7 5eff0d597a19 1365b534160e 8d068e32683a 9eb6d855d212 669398a7efd2 4e166f0c456a 61ffc7a70f75 a88a4a0a0efd 41d1ac132496 1b616c052d9d d18fd18a4cc5 b37af3a06a80 17a5b0471928 8ceae728ec44 bbef5694b3ed b8f3973d3fb7
	I0807 10:59:36.561097    9168 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0807 10:59:36.660394    9168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0807 10:59:36.663802    9168 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Aug  7 17:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Aug  7 17:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug  7 17:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Aug  7 17:59 /etc/kubernetes/scheduler.conf
	
	I0807 10:59:36.663834    9168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/admin.conf
	I0807 10:59:36.666789    9168 kubeadm.go:163] "https://control-plane.minikube.internal:51250" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0807 10:59:36.666813    9168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0807 10:59:36.669412    9168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/kubelet.conf
	I0807 10:59:36.672163    9168 kubeadm.go:163] "https://control-plane.minikube.internal:51250" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0807 10:59:36.672190    9168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0807 10:59:36.674878    9168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/controller-manager.conf
	I0807 10:59:36.677481    9168 kubeadm.go:163] "https://control-plane.minikube.internal:51250" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0807 10:59:36.677504    9168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0807 10:59:36.680237    9168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/scheduler.conf
	I0807 10:59:36.682852    9168 kubeadm.go:163] "https://control-plane.minikube.internal:51250" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0807 10:59:36.682869    9168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
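
Editor's note: the four grep/rm pairs above implement one rule: if a kubeconfig under /etc/kubernetes no longer mentions the expected control-plane endpoint, delete it so the subsequent kubeadm init phases can regenerate it. A sketch of that loop, assuming local file access (hypothetical helper, not minikube's actual kubeadm.go):

package main

import (
    "fmt"
    "os/exec"
)

// removeStaleConfigs mirrors the grep/rm pairs in the log: grep exits
// non-zero when the endpoint string is absent, and that absence is taken
// as "stale config, remove it". Hypothetical sketch using local exec.
func removeStaleConfigs(endpoint string, files []string) {
    for _, f := range files {
        if err := exec.Command("grep", endpoint, f).Run(); err != nil {
            fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
            if err := exec.Command("rm", "-f", f).Run(); err != nil {
                fmt.Println("remove failed:", err)
            }
        }
    }
}

func main() {
    removeStaleConfigs("https://control-plane.minikube.internal:51250", []string{
        "/etc/kubernetes/admin.conf",
        "/etc/kubernetes/kubelet.conf",
        "/etc/kubernetes/controller-manager.conf",
        "/etc/kubernetes/scheduler.conf",
    })
}
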
	I0807 10:59:36.685611    9168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0807 10:59:36.688347    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0807 10:59:36.721636    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0807 10:59:37.424821    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0807 10:59:37.604323    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0807 10:59:37.629180    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
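
Editor's note: with the stale files removed and the corrected kubeadm.yaml copied into place, the repair path replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the version-pinned v1.24.1 binaries rather than running a full kubeadm init. Roughly, assuming the binary is invoked by absolute path (a sketch, not minikube's actual code, which prepends the directory to PATH via sudo env over SSH):

package main

import (
    "fmt"
    "os/exec"
    "path/filepath"
)

// runInitPhases replays the init phases shown in the log against the
// pinned binaries directory, stopping at the first failure.
func runInitPhases(binDir, config string) error {
    phases := [][]string{
        {"certs", "all"},
        {"kubeconfig", "all"},
        {"kubelet-start"},
        {"control-plane", "all"},
        {"etcd", "local"},
    }
    kubeadm := filepath.Join(binDir, "kubeadm")
    for _, phase := range phases {
        args := append([]string{"init", "phase"}, phase...)
        args = append(args, "--config", config)
        if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
            return fmt.Errorf("kubeadm %v: %w\n%s", args, err, out)
        }
    }
    return nil
}

func main() {
    err := runInitPhases("/var/lib/minikube/binaries/v1.24.1",
        "/var/tmp/minikube/kubeadm.yaml")
    if err != nil {
        fmt.Println(err)
    }
}
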
	I0807 10:59:37.657837    9168 api_server.go:52] waiting for apiserver process to appear ...
	I0807 10:59:37.657919    9168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 10:59:38.160012    9168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 10:59:38.660264    9168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 10:59:39.159996    9168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 10:59:39.164513    9168 api_server.go:72] duration metric: took 1.506688542s to wait for apiserver process to appear ...
	I0807 10:59:39.164522    9168 api_server.go:88] waiting for apiserver healthz status ...
	I0807 10:59:39.164533    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 10:59:44.166769    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 10:59:44.166855    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 10:59:49.167767    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 10:59:49.167838    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 10:59:54.168847    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 10:59:54.168890    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 10:59:59.169842    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 10:59:59.169912    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:00:04.171612    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:00:04.171693    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:00:09.173233    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:00:09.173316    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:00:14.175859    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:00:14.175942    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:00:19.178576    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:00:19.178665    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:00:24.181307    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:00:24.181380    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:00:29.182769    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:00:29.182849    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:00:34.185473    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:00:34.185551    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:00:39.188245    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
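
Editor's note: each "Checking apiserver healthz ... stopped" pair above is one iteration of a health poll whose per-request client timeout (about 5s here) expires before any response arrives; after a fixed window with no healthy reply, the harness falls back to gathering component logs, as seen below. A sketch of that probe, assuming certificate verification is skipped for the probe only (hypothetical; the real client is built from the rest.Config shown earlier):

package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "time"
)

// pollHealthz GETs the apiserver /healthz endpoint with a short per-request
// timeout and a fixed number of attempts, mirroring the loop in the log.
func pollHealthz(url string, timeout, interval time.Duration, attempts int) bool {
    client := &http.Client{
        Timeout: timeout,
        // Assumption: skip cert verification for the probe; the real check
        // authenticates with the client cert/key from the rest.Config.
        Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        },
    }
    for i := 0; i < attempts; i++ {
        resp, err := client.Get(url)
        if err != nil {
            fmt.Println("stopped:", err) // e.g. context deadline exceeded
            time.Sleep(interval)
            continue
        }
        resp.Body.Close()
        if resp.StatusCode == http.StatusOK {
            return true
        }
        time.Sleep(interval)
    }
    return false
}

func main() {
    fmt.Println("healthy:", pollHealthz("https://10.0.2.15:8443/healthz",
        5*time.Second, 500*time.Millisecond, 12))
}
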
	I0807 11:00:39.188696    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:00:39.224754    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:00:39.224890    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:00:39.245155    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:00:39.245270    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:00:39.259433    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:00:39.259500    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:00:39.271856    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:00:39.271932    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:00:39.282819    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:00:39.282889    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:00:39.293153    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:00:39.293240    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:00:39.303369    9168 logs.go:276] 0 containers: []
	W0807 11:00:39.303384    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:00:39.303452    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:00:39.313759    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:00:39.313787    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:00:39.313792    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:00:39.324902    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:00:39.324914    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:00:39.365198    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:00:39.365213    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:00:39.434373    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:00:39.434388    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:00:39.445369    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:00:39.445380    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:00:39.461392    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:00:39.461407    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:00:39.479621    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:00:39.479632    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:00:39.490999    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:00:39.491008    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:00:39.503433    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:00:39.503444    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:00:39.523132    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:00:39.523142    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:00:39.542097    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:00:39.542108    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:00:39.558308    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:00:39.558318    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:00:39.569788    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:00:39.569797    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:00:39.596638    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:00:39.596645    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:00:39.600670    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:00:39.600677    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:00:39.611520    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:00:39.611533    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:00:39.623076    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:00:39.623085    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
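
Editor's note: the gathering pass above (and each identical pass that follows) enumerates every k8s_<component> container, running or exited, and tails its last 400 lines, alongside journalctl for the kubelet and docker units, dmesg, and kubectl describe nodes. The container half reduces to two docker invocations per component; a sketch under the assumption of local docker access (the harness actually runs these via its SSH runner):

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// containerIDs lists all containers, running or exited, whose name matches
// the k8s_<component> prefix, mirroring the docker ps -a --filter calls above.
func containerIDs(component string) []string {
    out, err := exec.Command("docker", "ps", "-a",
        "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    if err != nil {
        return nil
    }
    return strings.Fields(string(out))
}

func main() {
    // Component list taken from the log; the journalctl/dmesg/describe-nodes
    // sources gathered alongside the containers are omitted here.
    components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "storage-provisioner"}
    for _, c := range components {
        for _, id := range containerIDs(c) {
            fmt.Printf("Gathering logs for %s [%s]\n", c, id)
            logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            fmt.Print(string(logs))
        }
    }
}
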
	I0807 11:00:42.143183    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:00:47.146084    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:00:47.146503    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:00:47.189808    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:00:47.189935    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:00:47.210446    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:00:47.210536    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:00:47.225274    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:00:47.225346    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:00:47.237529    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:00:47.237607    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:00:47.248178    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:00:47.248249    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:00:47.263126    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:00:47.263182    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:00:47.275047    9168 logs.go:276] 0 containers: []
	W0807 11:00:47.275058    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:00:47.275113    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:00:47.285695    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:00:47.285716    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:00:47.285721    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:00:47.312389    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:00:47.312396    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:00:47.316920    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:00:47.316927    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:00:47.352223    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:00:47.352235    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:00:47.366823    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:00:47.366834    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:00:47.377620    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:00:47.377633    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:00:47.390002    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:00:47.390013    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:00:47.401364    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:00:47.401377    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:00:47.412421    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:00:47.412432    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:00:47.431320    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:00:47.431331    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:00:47.448220    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:00:47.448230    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:00:47.460853    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:00:47.460864    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:00:47.472730    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:00:47.472741    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:00:47.484886    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:00:47.484899    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:00:47.526090    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:00:47.526097    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:00:47.539998    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:00:47.540009    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:00:47.561741    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:00:47.561754    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:00:50.076840    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:00:55.079622    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:00:55.080044    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:00:55.119562    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:00:55.119686    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:00:55.140957    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:00:55.141074    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:00:55.159631    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:00:55.159712    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:00:55.172130    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:00:55.172209    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:00:55.183130    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:00:55.183192    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:00:55.193796    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:00:55.193878    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:00:55.207634    9168 logs.go:276] 0 containers: []
	W0807 11:00:55.207645    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:00:55.207698    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:00:55.220613    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:00:55.220643    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:00:55.220648    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:00:55.234779    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:00:55.234792    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:00:55.246276    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:00:55.246287    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:00:55.259599    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:00:55.259612    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:00:55.273635    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:00:55.273648    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:00:55.298267    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:00:55.298273    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:00:55.336414    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:00:55.336421    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:00:55.340786    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:00:55.340794    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:00:55.355190    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:00:55.355201    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:00:55.373662    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:00:55.373675    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:00:55.385724    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:00:55.385734    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:00:55.398676    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:00:55.398690    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:00:55.415850    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:00:55.415863    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:00:55.427586    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:00:55.427597    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:00:55.465094    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:00:55.465108    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:00:55.483093    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:00:55.483107    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:00:55.500926    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:00:55.500937    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:00:58.020213    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:01:03.022979    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:01:03.023403    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:01:03.062388    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:01:03.062530    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:01:03.086978    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:01:03.087090    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:01:03.101477    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:01:03.101560    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:01:03.112998    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:01:03.113072    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:01:03.123308    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:01:03.123370    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:01:03.134033    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:01:03.134108    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:01:03.144653    9168 logs.go:276] 0 containers: []
	W0807 11:01:03.144665    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:01:03.144716    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:01:03.154966    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:01:03.154980    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:01:03.154985    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:01:03.168620    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:01:03.168638    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:01:03.179952    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:01:03.179968    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:01:03.191414    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:01:03.191427    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:01:03.216551    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:01:03.216559    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:01:03.228002    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:01:03.228015    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:01:03.266988    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:01:03.266996    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:01:03.285737    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:01:03.285751    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:01:03.299770    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:01:03.299780    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:01:03.333963    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:01:03.333974    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:01:03.349239    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:01:03.349253    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:01:03.366928    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:01:03.366940    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:01:03.378088    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:01:03.378098    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:01:03.395529    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:01:03.395541    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:01:03.400510    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:01:03.400517    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:01:03.412183    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:01:03.412198    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:01:03.423180    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:01:03.423190    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:01:05.939704    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:01:10.942506    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:01:10.942943    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:01:10.982552    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:01:10.982689    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:01:11.004878    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:01:11.005021    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:01:11.020175    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:01:11.020260    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:01:11.036394    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:01:11.036462    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:01:11.047296    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:01:11.047364    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:01:11.058371    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:01:11.058441    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:01:11.068984    9168 logs.go:276] 0 containers: []
	W0807 11:01:11.068995    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:01:11.069048    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:01:11.079548    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:01:11.079565    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:01:11.079571    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:01:11.115541    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:01:11.115554    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:01:11.133610    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:01:11.133620    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:01:11.157362    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:01:11.157372    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:01:11.168909    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:01:11.168922    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:01:11.180418    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:01:11.180432    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:01:11.220644    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:01:11.220650    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:01:11.224982    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:01:11.224990    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:01:11.236410    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:01:11.236421    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:01:11.250790    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:01:11.250800    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:01:11.262732    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:01:11.262744    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:01:11.283937    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:01:11.283949    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:01:11.297708    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:01:11.297722    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:01:11.309613    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:01:11.309626    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:01:11.320652    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:01:11.320661    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:01:11.334444    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:01:11.334455    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:01:11.346130    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:01:11.346142    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:01:13.874062    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:01:18.876940    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:01:18.877407    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:01:18.918163    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:01:18.918291    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:01:18.939464    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:01:18.939565    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:01:18.954304    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:01:18.954401    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:01:18.966899    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:01:18.966966    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:01:18.977217    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:01:18.977280    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:01:18.987420    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:01:18.987481    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:01:18.997555    9168 logs.go:276] 0 containers: []
	W0807 11:01:18.997564    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:01:18.997615    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:01:19.008213    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:01:19.008228    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:01:19.008233    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:01:19.027940    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:01:19.027953    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:01:19.039266    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:01:19.039278    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:01:19.063866    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:01:19.063874    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:01:19.102492    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:01:19.102499    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:01:19.116343    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:01:19.116353    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:01:19.127466    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:01:19.127478    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:01:19.139669    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:01:19.139679    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:01:19.151368    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:01:19.151377    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:01:19.163263    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:01:19.163272    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:01:19.175130    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:01:19.175141    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:01:19.186441    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:01:19.186453    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:01:19.200495    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:01:19.200505    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:01:19.217813    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:01:19.217822    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:01:19.229230    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:01:19.229241    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:01:19.247072    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:01:19.247080    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:01:19.251930    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:01:19.251937    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:01:21.790125    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:01:26.792406    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:01:26.792650    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:01:26.812547    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:01:26.812641    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:01:26.827094    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:01:26.827172    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:01:26.839307    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:01:26.839376    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:01:26.850268    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:01:26.850337    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:01:26.860612    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:01:26.860684    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:01:26.871135    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:01:26.871197    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:01:26.882818    9168 logs.go:276] 0 containers: []
	W0807 11:01:26.882828    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:01:26.882881    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:01:26.897402    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:01:26.897419    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:01:26.897425    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:01:26.933271    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:01:26.933282    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:01:26.947317    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:01:26.947329    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:01:26.969544    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:01:26.969557    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:01:26.987396    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:01:26.987408    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:01:26.999404    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:01:26.999417    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:01:27.017778    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:01:27.017790    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:01:27.030963    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:01:27.030976    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:01:27.042732    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:01:27.042745    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:01:27.054469    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:01:27.054479    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:01:27.065268    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:01:27.065280    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:01:27.076982    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:01:27.076995    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:01:27.117474    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:01:27.117483    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:01:27.128572    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:01:27.128586    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:01:27.146760    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:01:27.146771    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:01:27.171948    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:01:27.171954    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:01:27.176284    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:01:27.176291    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:01:29.689551    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:01:34.692440    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:01:34.692860    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:01:34.733299    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:01:34.733441    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:01:34.755924    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:01:34.756032    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:01:34.771387    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:01:34.771460    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:01:34.783691    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:01:34.783758    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:01:34.802163    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:01:34.802231    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:01:34.812804    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:01:34.812870    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:01:34.827361    9168 logs.go:276] 0 containers: []
	W0807 11:01:34.827376    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:01:34.827428    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:01:34.838133    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:01:34.838150    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:01:34.838155    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:01:34.851708    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:01:34.851721    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:01:34.863186    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:01:34.863200    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:01:34.880785    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:01:34.880798    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:01:34.915883    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:01:34.915896    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:01:34.931466    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:01:34.931480    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:01:34.943260    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:01:34.943272    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:01:34.956918    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:01:34.956928    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:01:34.968944    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:01:34.968957    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:01:34.994438    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:01:34.994445    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:01:34.998537    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:01:34.998545    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:01:35.016175    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:01:35.016188    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:01:35.055946    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:01:35.055956    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:01:35.067285    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:01:35.067298    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:01:35.082918    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:01:35.082929    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:01:35.097292    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:01:35.097301    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:01:35.108909    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:01:35.108918    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:01:37.622383    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:01:42.624844    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:01:42.625282    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:01:42.665317    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:01:42.665448    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:01:42.687588    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:01:42.687685    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:01:42.702493    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:01:42.702563    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:01:42.715339    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:01:42.715404    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:01:42.726517    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:01:42.726585    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:01:42.737134    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:01:42.737201    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:01:42.747521    9168 logs.go:276] 0 containers: []
	W0807 11:01:42.747531    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:01:42.747584    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:01:42.760906    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:01:42.760925    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:01:42.760931    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:01:42.773332    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:01:42.773343    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:01:42.785246    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:01:42.785256    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:01:42.820995    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:01:42.821008    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:01:42.833143    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:01:42.833155    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:01:42.847759    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:01:42.847771    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:01:42.865489    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:01:42.865498    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:01:42.904752    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:01:42.904762    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:01:42.919016    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:01:42.919028    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:01:42.930614    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:01:42.930626    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:01:42.942491    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:01:42.942502    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:01:42.955075    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:01:42.955086    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:01:42.971058    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:01:42.971068    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:01:42.982944    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:01:42.982957    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:01:42.987127    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:01:42.987135    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:01:43.021127    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:01:43.021136    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:01:43.039137    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:01:43.039148    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
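The block above is one complete diagnostic cycle: api_server.go polls https://10.0.2.15:8443/healthz with a 5-second client timeout, the request times out ("context deadline exceeded"), and minikube falls back to enumerating the control-plane containers and tailing their logs before trying again. A minimal Go sketch of that health poll, assuming a plain net/http client with certificate verification disabled; checkHealthz and the retry cadence are illustrative, not minikube's actual code:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the 5 s gap before each "stopped:" line above
            Transport: &http.Transport{
                // the apiserver serves a self-signed cert inside the VM
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "context deadline exceeded", as in the log
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        for {
            if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
                fmt.Println("apiserver not healthy yet:", err)
                time.Sleep(3 * time.Second) // back off before the next attempt
                continue
            }
            fmt.Println("apiserver healthy")
            return
        }
    }

In this run the check never returns within the 5-second timeout, which is why every cycle ends in a "stopped:" line rather than an HTTP status.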
	I0807 11:01:45.564878    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:01:50.567344    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:01:50.567784    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:01:50.601666    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:01:50.601821    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:01:50.629247    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:01:50.629331    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:01:50.642595    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:01:50.642666    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:01:50.653983    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:01:50.654043    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:01:50.664775    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:01:50.664844    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:01:50.675502    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:01:50.675566    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:01:50.685458    9168 logs.go:276] 0 containers: []
	W0807 11:01:50.685474    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:01:50.685531    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:01:50.695962    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:01:50.695980    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:01:50.695986    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:01:50.713404    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:01:50.713417    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:01:50.724813    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:01:50.724825    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:01:50.736500    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:01:50.736510    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:01:50.748192    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:01:50.748227    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:01:50.753115    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:01:50.753124    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:01:50.786906    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:01:50.786919    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:01:50.798524    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:01:50.798535    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:01:50.812055    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:01:50.812066    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:01:50.837756    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:01:50.837763    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:01:50.849346    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:01:50.849356    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:01:50.888980    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:01:50.888991    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:01:50.903260    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:01:50.903275    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:01:50.917285    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:01:50.917297    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:01:50.932568    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:01:50.932577    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:01:50.953291    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:01:50.953302    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:01:50.967391    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:01:50.967405    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
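Each cycle locates the component containers the same way: one docker ps -a call per component, filtered by the k8s_<name> container-name prefix and formatted down to bare IDs, after which logs.go:276 reports the count. A sketch of that lookup, assuming a local docker CLI on PATH; componentIDs is an illustrative helper, not a minikube function:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func componentIDs(name string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+name,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        // one container ID per output line
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
            ids, err := componentIDs(c)
            if err != nil {
                fmt.Println(c, "lookup failed:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors the logs.go:276 lines above
        }
    }

Two IDs per component (e.g. [246aeeaf4658 9827cca0f570] for kube-apiserver) mean docker ps -a is matching an exited instance alongside the current one, consistent with restarted control-plane containers; the empty kindnet result only produces a warning, since that lookup is logged at W level rather than treated as an error.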
	I0807 11:01:53.485876    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:01:58.487153    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:01:58.487290    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:01:58.498733    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:01:58.498810    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:01:58.509987    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:01:58.510054    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:01:58.520768    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:01:58.520822    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:01:58.535802    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:01:58.535869    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:01:58.546591    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:01:58.546652    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:01:58.557079    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:01:58.557139    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:01:58.567317    9168 logs.go:276] 0 containers: []
	W0807 11:01:58.567327    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:01:58.567380    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:01:58.587783    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:01:58.587803    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:01:58.587808    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:01:58.600149    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:01:58.600161    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:01:58.614188    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:01:58.614202    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:01:58.632565    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:01:58.632575    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:01:58.646462    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:01:58.646476    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:01:58.660540    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:01:58.660554    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:01:58.678613    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:01:58.678627    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:01:58.689922    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:01:58.689933    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:01:58.702277    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:01:58.702286    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:01:58.726507    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:01:58.726514    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:01:58.730670    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:01:58.730676    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:01:58.742116    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:01:58.742128    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:01:58.765933    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:01:58.765943    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:01:58.777867    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:01:58.777876    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:01:58.789665    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:01:58.789676    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:01:58.802202    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:01:58.802212    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:01:58.843618    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:01:58.843624    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:02:01.380865    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:02:06.381684    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:02:06.381968    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:02:06.411473    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:02:06.411589    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:02:06.429470    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:02:06.429571    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:02:06.442632    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:02:06.442706    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:02:06.454853    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:02:06.454925    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:02:06.465179    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:02:06.465247    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:02:06.487801    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:02:06.487875    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:02:06.498050    9168 logs.go:276] 0 containers: []
	W0807 11:02:06.498064    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:02:06.498126    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:02:06.508030    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:02:06.508047    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:02:06.508052    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:02:06.518792    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:02:06.518807    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:02:06.530396    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:02:06.530407    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:02:06.542494    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:02:06.542508    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:02:06.556471    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:02:06.556481    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:02:06.567982    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:02:06.567996    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:02:06.608588    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:02:06.608597    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:02:06.612984    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:02:06.612993    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:02:06.626969    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:02:06.626979    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:02:06.644889    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:02:06.644902    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:02:06.658972    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:02:06.658983    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:02:06.676383    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:02:06.676394    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:02:06.688373    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:02:06.688386    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:02:06.712449    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:02:06.712457    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:02:06.725560    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:02:06.725574    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:02:06.760173    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:02:06.760186    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:02:06.772455    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:02:06.772468    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:02:09.286268    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:02:14.288463    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:02:14.288685    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:02:14.308614    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:02:14.308712    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:02:14.324223    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:02:14.324291    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:02:14.336578    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:02:14.336648    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:02:14.348381    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:02:14.348447    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:02:14.359078    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:02:14.359144    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:02:14.369970    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:02:14.370035    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:02:14.381358    9168 logs.go:276] 0 containers: []
	W0807 11:02:14.381370    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:02:14.381426    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:02:14.392091    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:02:14.392111    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:02:14.392116    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:02:14.403884    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:02:14.403898    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:02:14.421586    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:02:14.421596    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:02:14.445042    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:02:14.445051    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:02:14.458835    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:02:14.458848    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:02:14.473448    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:02:14.473461    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:02:14.484826    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:02:14.484837    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:02:14.499866    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:02:14.499878    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:02:14.511620    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:02:14.511631    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:02:14.524062    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:02:14.524075    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:02:14.538092    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:02:14.538103    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:02:14.549620    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:02:14.549630    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:02:14.588395    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:02:14.588405    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:02:14.592903    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:02:14.592911    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:02:14.628130    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:02:14.628141    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:02:14.646101    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:02:14.646112    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:02:14.657651    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:02:14.657662    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:02:17.170822    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:02:22.172430    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:02:22.172545    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:02:22.189055    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:02:22.189127    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:02:22.204906    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:02:22.204975    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:02:22.215787    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:02:22.215853    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:02:22.227195    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:02:22.227269    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:02:22.241448    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:02:22.241528    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:02:22.254626    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:02:22.254700    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:02:22.269154    9168 logs.go:276] 0 containers: []
	W0807 11:02:22.269167    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:02:22.269227    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:02:22.280979    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:02:22.281002    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:02:22.281008    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:02:22.285747    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:02:22.285756    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:02:22.298746    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:02:22.298757    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:02:22.311855    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:02:22.311868    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:02:22.327151    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:02:22.327169    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:02:22.339758    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:02:22.339772    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:02:22.354765    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:02:22.354779    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:02:22.369509    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:02:22.369523    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:02:22.381050    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:02:22.381064    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:02:22.393773    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:02:22.393786    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:02:22.410279    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:02:22.410291    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:02:22.432046    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:02:22.432059    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:02:22.458084    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:02:22.458097    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:02:22.501747    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:02:22.501765    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:02:22.541776    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:02:22.541788    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:02:22.556229    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:02:22.556240    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:02:22.573807    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:02:22.573816    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:02:25.090086    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:02:30.090874    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:02:30.091345    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:02:30.131845    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:02:30.131986    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:02:30.152769    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:02:30.152866    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:02:30.167861    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:02:30.167939    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:02:30.181034    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:02:30.181108    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:02:30.192331    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:02:30.192395    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:02:30.203322    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:02:30.203398    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:02:30.214236    9168 logs.go:276] 0 containers: []
	W0807 11:02:30.214248    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:02:30.214303    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:02:30.224733    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:02:30.224750    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:02:30.224756    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:02:30.235826    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:02:30.235837    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:02:30.247653    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:02:30.247665    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:02:30.259183    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:02:30.259195    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:02:30.270448    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:02:30.270462    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:02:30.286619    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:02:30.286629    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:02:30.304022    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:02:30.304034    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:02:30.326106    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:02:30.326118    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:02:30.330285    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:02:30.330292    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:02:30.363964    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:02:30.363977    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:02:30.377961    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:02:30.377972    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:02:30.389707    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:02:30.389723    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:02:30.405481    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:02:30.405493    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:02:30.417178    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:02:30.417193    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:02:30.428613    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:02:30.428622    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:02:30.469632    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:02:30.469639    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:02:30.481866    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:02:30.481877    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:02:33.008079    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:02:38.010758    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:02:38.010881    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:02:38.022798    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:02:38.022866    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:02:38.033504    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:02:38.033575    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:02:38.044064    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:02:38.044130    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:02:38.055882    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:02:38.055976    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:02:38.066931    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:02:38.067016    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:02:38.077731    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:02:38.077804    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:02:38.092055    9168 logs.go:276] 0 containers: []
	W0807 11:02:38.092066    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:02:38.092127    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:02:38.103156    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:02:38.103175    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:02:38.103181    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:02:38.108039    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:02:38.108047    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:02:38.144319    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:02:38.144332    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:02:38.158222    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:02:38.158233    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:02:38.170699    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:02:38.170711    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:02:38.181641    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:02:38.181652    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:02:38.193855    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:02:38.193866    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:02:38.207664    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:02:38.207675    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:02:38.225498    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:02:38.225507    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:02:38.237142    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:02:38.237155    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:02:38.254054    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:02:38.254063    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:02:38.266973    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:02:38.266987    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:02:38.291623    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:02:38.291635    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:02:38.303553    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:02:38.303565    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:02:38.344684    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:02:38.344693    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:02:38.362660    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:02:38.362673    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:02:38.373837    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:02:38.373847    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:02:40.887310    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:02:45.888808    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:02:45.889021    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:02:45.900462    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:02:45.900539    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:02:45.912171    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:02:45.912241    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:02:45.923108    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:02:45.923176    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:02:45.933753    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:02:45.933815    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:02:45.944252    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:02:45.944308    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:02:45.955580    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:02:45.955651    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:02:45.966136    9168 logs.go:276] 0 containers: []
	W0807 11:02:45.966148    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:02:45.966206    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:02:45.976545    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:02:45.976566    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:02:45.976571    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:02:45.987925    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:02:45.987936    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:02:45.999410    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:02:45.999424    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:02:46.013845    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:02:46.013854    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:02:46.025498    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:02:46.025507    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:02:46.046785    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:02:46.046795    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:02:46.062342    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:02:46.062353    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:02:46.074306    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:02:46.074317    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:02:46.078894    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:02:46.078900    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:02:46.092502    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:02:46.092513    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:02:46.104269    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:02:46.104279    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:02:46.129319    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:02:46.129326    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:02:46.141301    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:02:46.141313    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:02:46.181340    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:02:46.181347    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:02:46.216979    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:02:46.216990    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:02:46.231076    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:02:46.231088    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:02:46.245401    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:02:46.245412    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:02:48.768810    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:02:53.771118    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:02:53.771220    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:02:53.785339    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:02:53.785411    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:02:53.799233    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:02:53.799300    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:02:53.809784    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:02:53.809849    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:02:53.820640    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:02:53.820709    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:02:53.831666    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:02:53.831729    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:02:53.843033    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:02:53.843099    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:02:53.853187    9168 logs.go:276] 0 containers: []
	W0807 11:02:53.853199    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:02:53.853257    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:02:53.863939    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:02:53.863959    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:02:53.863965    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:02:53.881384    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:02:53.881395    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:02:53.896087    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:02:53.896097    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:02:53.913484    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:02:53.913494    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:02:53.925131    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:02:53.925140    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:02:53.936764    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:02:53.936775    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:02:53.948010    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:02:53.948025    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:02:53.961975    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:02:53.961988    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:02:54.001089    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:02:54.001101    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:02:54.012599    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:02:54.012610    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:02:54.024032    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:02:54.024044    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:02:54.047991    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:02:54.048000    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:02:54.052212    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:02:54.052218    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:02:54.069636    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:02:54.069646    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:02:54.081650    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:02:54.081666    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:02:54.092468    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:02:54.092478    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:02:54.130729    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:02:54.130737    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:02:56.644368    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:01.647144    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:01.647305    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:03:01.663991    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:03:01.664071    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:03:01.677760    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:03:01.677831    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:03:01.692232    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:03:01.692302    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:03:01.703486    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:03:01.703551    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:03:01.714596    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:03:01.714661    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:03:01.729048    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:03:01.729109    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:03:01.738928    9168 logs.go:276] 0 containers: []
	W0807 11:03:01.738939    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:03:01.738990    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:03:01.750137    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:03:01.750157    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:03:01.750163    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:03:01.762272    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:03:01.762286    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:03:01.767667    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:03:01.767680    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:03:01.782338    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:03:01.782349    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:03:01.795521    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:03:01.795544    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:03:01.808425    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:03:01.808437    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:03:01.823537    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:03:01.823553    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:03:01.835498    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:03:01.835513    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:03:01.849020    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:03:01.849032    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:03:01.873841    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:03:01.873855    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:03:01.913441    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:03:01.913453    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:03:01.925913    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:03:01.925926    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:03:01.948375    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:03:01.948385    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:03:01.966295    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:03:01.966311    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:03:01.980833    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:03:01.980844    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:03:01.992970    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:03:01.992983    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:03:02.036220    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:03:02.036246    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:03:04.552368    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:09.552533    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:09.552632    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:03:09.564273    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:03:09.564356    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:03:09.575523    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:03:09.575587    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:03:09.587871    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:03:09.587938    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:03:09.598674    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:03:09.598740    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:03:09.609489    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:03:09.609564    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:03:09.620444    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:03:09.620527    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:03:09.632545    9168 logs.go:276] 0 containers: []
	W0807 11:03:09.632555    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:03:09.632611    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:03:09.643212    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
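The enumeration above applies the same docker ps filter once per component; the k8s_ name prefix is the container-naming convention of cri-dockerd, which is how minikube locates pod containers even without a working apiserver. Condensed, using exactly the filters shown in the log:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      echo "== ${c} =="
      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'
    done

Two IDs for a component typically mean an exited container from an earlier start attempt plus the current one, which is why logs are gathered for both.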
	I0807 11:03:09.643230    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:03:09.643236    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:03:09.678715    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:03:09.678728    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:03:09.694277    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:03:09.694291    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:03:09.706214    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:03:09.706226    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:03:09.725213    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:03:09.725225    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:03:09.746426    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:03:09.746438    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:03:09.762303    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:03:09.762314    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:03:09.774900    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:03:09.774912    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:03:09.779645    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:03:09.779652    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:03:09.791999    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:03:09.792011    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:03:09.804427    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:03:09.804438    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:03:09.816148    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:03:09.816159    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:03:09.830550    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:03:09.830561    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:03:09.843262    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:03:09.843273    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:03:09.868559    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:03:09.868566    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:03:09.880587    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:03:09.880624    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:03:09.904082    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:03:09.904096    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
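Each gathering pass is bounded so repeated passes stay cheap: container logs are capped at the last 400 lines and unit logs at the last 400 journal entries. The same pulls can be reproduced by hand with the IDs shown above:

    docker logs --tail 400 e81801e7ff22                # current etcd container
    sudo journalctl -u kubelet -n 400                  # kubelet unit
    sudo journalctl -u docker -u cri-docker -n 400     # Docker daemon + CRI shim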
	I0807 11:03:12.447827    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:17.450374    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:17.450511    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:03:17.463524    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:03:17.463608    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:03:17.479952    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:03:17.480017    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:03:17.496593    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:03:17.496672    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:03:17.507284    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:03:17.507356    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:03:17.518093    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:03:17.518157    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:03:17.529252    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:03:17.529317    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:03:17.538972    9168 logs.go:276] 0 containers: []
	W0807 11:03:17.538984    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:03:17.539042    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:03:17.549330    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:03:17.549356    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:03:17.549362    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:03:17.591220    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:03:17.591231    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:03:17.626668    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:03:17.626682    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:03:17.641238    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:03:17.641250    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:03:17.659591    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:03:17.659601    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:03:17.674374    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:03:17.674385    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:03:17.678859    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:03:17.678868    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:03:17.693604    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:03:17.693614    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:03:17.705233    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:03:17.705249    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:03:17.717194    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:03:17.717205    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:03:17.728242    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:03:17.728256    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:03:17.739482    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:03:17.739492    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:03:17.761478    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:03:17.761487    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:03:17.774999    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:03:17.775011    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:03:17.787260    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:03:17.787274    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:03:17.798809    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:03:17.798820    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:03:17.813568    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:03:17.813579    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:03:20.339823    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:25.342063    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:25.342188    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:03:25.357384    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:03:25.357466    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:03:25.369793    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:03:25.369858    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:03:25.380740    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:03:25.380805    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:03:25.391012    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:03:25.391079    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:03:25.402149    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:03:25.402217    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:03:25.413038    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:03:25.413106    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:03:25.423409    9168 logs.go:276] 0 containers: []
	W0807 11:03:25.423421    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:03:25.423476    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:03:25.433758    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:03:25.433778    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:03:25.433783    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:03:25.448209    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:03:25.448220    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:03:25.462791    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:03:25.462801    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:03:25.478909    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:03:25.478923    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:03:25.497562    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:03:25.497576    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:03:25.509726    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:03:25.509742    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:03:25.552614    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:03:25.552636    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:03:25.583031    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:03:25.583044    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:03:25.601236    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:03:25.601246    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:03:25.624229    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:03:25.624242    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:03:25.646535    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:03:25.646543    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:03:25.658177    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:03:25.658191    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:03:25.670135    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:03:25.670150    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:03:25.705168    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:03:25.705179    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:03:25.716853    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:03:25.716864    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:03:25.732451    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:03:25.732463    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:03:25.746268    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:03:25.746280    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:03:28.252954    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:33.255259    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:33.255560    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:03:33.287858    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:03:33.288000    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:03:33.307412    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:03:33.307509    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:03:33.322117    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:03:33.322199    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:03:33.334420    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:03:33.334490    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:03:33.345070    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:03:33.345145    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:03:33.355797    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:03:33.355867    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:03:33.366072    9168 logs.go:276] 0 containers: []
	W0807 11:03:33.366083    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:03:33.366140    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:03:33.377629    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:03:33.377646    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:03:33.377655    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:03:33.395183    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:03:33.395195    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:03:33.407055    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:03:33.407065    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:03:33.418918    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:03:33.418927    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:03:33.460200    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:03:33.460211    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:03:33.474319    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:03:33.474330    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:03:33.486411    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:03:33.486423    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:03:33.501026    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:03:33.501035    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:03:33.505422    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:03:33.505429    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:03:33.539974    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:03:33.539984    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:03:33.551130    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:03:33.551142    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:03:33.573375    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:03:33.573382    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:03:33.599186    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:03:33.599196    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:03:33.613392    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:03:33.613403    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:03:33.625432    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:03:33.625442    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:03:33.637076    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:03:33.637086    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:03:33.653061    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:03:33.653076    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:03:36.166688    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:41.167598    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:41.167660    9168 kubeadm.go:597] duration metric: took 4m4.917555333s to restartPrimaryControlPlane
	W0807 11:03:41.167720    9168 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0807 11:03:41.167745    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0807 11:03:42.169497    9168 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.001747833s)
	I0807 11:03:42.169552    9168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 11:03:42.174678    9168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0807 11:03:42.177639    9168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0807 11:03:42.180370    9168 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0807 11:03:42.180376    9168 kubeadm.go:157] found existing configuration files:
	
	I0807 11:03:42.180399    9168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/admin.conf
	I0807 11:03:42.183078    9168 kubeadm.go:163] "https://control-plane.minikube.internal:51250" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0807 11:03:42.183100    9168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0807 11:03:42.186368    9168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/kubelet.conf
	I0807 11:03:42.189220    9168 kubeadm.go:163] "https://control-plane.minikube.internal:51250" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0807 11:03:42.189242    9168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0807 11:03:42.191801    9168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/controller-manager.conf
	I0807 11:03:42.194919    9168 kubeadm.go:163] "https://control-plane.minikube.internal:51250" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0807 11:03:42.194942    9168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0807 11:03:42.197917    9168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/scheduler.conf
	I0807 11:03:42.200397    9168 kubeadm.go:163] "https://control-plane.minikube.internal:51250" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0807 11:03:42.200421    9168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
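The grep/rm sequence above is the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted so kubeadm init can regenerate it. Here every grep exits with status 2 because the kubeadm reset had already removed the files. The whole sequence condenses to a loop like this (same files and endpoint as in the log):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:51250" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done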
	I0807 11:03:42.203071    9168 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0807 11:03:42.221016    9168 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0807 11:03:42.221095    9168 kubeadm.go:310] [preflight] Running pre-flight checks
	I0807 11:03:42.268888    9168 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0807 11:03:42.268958    9168 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0807 11:03:42.269005    9168 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
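Acting on the preflight hint above ahead of time avoids pulling images during bring-up; a sketch using the same pinned binaries path as the init command below (the explicit --kubernetes-version is an assumption here, matching the version kubeadm reports):

    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm config images pull --kubernetes-version v1.24.1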
	I0807 11:03:42.318553    9168 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0807 11:03:42.322813    9168 out.go:204]   - Generating certificates and keys ...
	I0807 11:03:42.322847    9168 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0807 11:03:42.322886    9168 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0807 11:03:42.322930    9168 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0807 11:03:42.322962    9168 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0807 11:03:42.322995    9168 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0807 11:03:42.323021    9168 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0807 11:03:42.323050    9168 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0807 11:03:42.323087    9168 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0807 11:03:42.323121    9168 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0807 11:03:42.323156    9168 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0807 11:03:42.323176    9168 kubeadm.go:310] [certs] Using the existing "sa" key
	I0807 11:03:42.323202    9168 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0807 11:03:42.411599    9168 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0807 11:03:42.512812    9168 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0807 11:03:42.662691    9168 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0807 11:03:42.743147    9168 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0807 11:03:42.773453    9168 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0807 11:03:42.774811    9168 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0807 11:03:42.774837    9168 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0807 11:03:42.844379    9168 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0807 11:03:42.848698    9168 out.go:204]   - Booting up control plane ...
	I0807 11:03:42.848749    9168 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0807 11:03:42.848787    9168 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0807 11:03:42.848839    9168 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0807 11:03:42.848886    9168 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0807 11:03:42.848980    9168 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0807 11:03:47.348663    9168 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503128 seconds
	I0807 11:03:47.348740    9168 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0807 11:03:47.351965    9168 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0807 11:03:47.881955    9168 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0807 11:03:47.882350    9168 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-210000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0807 11:03:48.387143    9168 kubeadm.go:310] [bootstrap-token] Using token: k3wlnb.r7zejrhmlgya4r9l
	I0807 11:03:48.393787    9168 out.go:204]   - Configuring RBAC rules ...
	I0807 11:03:48.393846    9168 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0807 11:03:48.393889    9168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0807 11:03:48.400543    9168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0807 11:03:48.401690    9168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0807 11:03:48.403428    9168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0807 11:03:48.404394    9168 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0807 11:03:48.407950    9168 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0807 11:03:48.557151    9168 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0807 11:03:48.791466    9168 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0807 11:03:48.792108    9168 kubeadm.go:310] 
	I0807 11:03:48.792140    9168 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0807 11:03:48.792145    9168 kubeadm.go:310] 
	I0807 11:03:48.792185    9168 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0807 11:03:48.792192    9168 kubeadm.go:310] 
	I0807 11:03:48.792204    9168 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0807 11:03:48.792235    9168 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0807 11:03:48.792262    9168 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0807 11:03:48.792265    9168 kubeadm.go:310] 
	I0807 11:03:48.792287    9168 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0807 11:03:48.792290    9168 kubeadm.go:310] 
	I0807 11:03:48.792311    9168 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0807 11:03:48.792315    9168 kubeadm.go:310] 
	I0807 11:03:48.792338    9168 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0807 11:03:48.792378    9168 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0807 11:03:48.792422    9168 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0807 11:03:48.792425    9168 kubeadm.go:310] 
	I0807 11:03:48.792461    9168 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0807 11:03:48.792500    9168 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0807 11:03:48.792505    9168 kubeadm.go:310] 
	I0807 11:03:48.792546    9168 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k3wlnb.r7zejrhmlgya4r9l \
	I0807 11:03:48.792605    9168 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:be167f124a8fb636a1678f53a2902f8fa29ed7e7a056c6f6e13484429b709f7d \
	I0807 11:03:48.792622    9168 kubeadm.go:310] 	--control-plane 
	I0807 11:03:48.792625    9168 kubeadm.go:310] 
	I0807 11:03:48.792726    9168 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0807 11:03:48.792736    9168 kubeadm.go:310] 
	I0807 11:03:48.792776    9168 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k3wlnb.r7zejrhmlgya4r9l \
	I0807 11:03:48.792825    9168 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:be167f124a8fb636a1678f53a2902f8fa29ed7e7a056c6f6e13484429b709f7d 
	I0807 11:03:48.792901    9168 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
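The [WARNING Service-Kubelet] above is harmless in this flow, since minikube starts the kubelet itself a few steps later, but the remediation kubeadm suggests is a single command inside the guest:

    sudo systemctl enable kubelet.service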
	I0807 11:03:48.792911    9168 cni.go:84] Creating CNI manager for ""
	I0807 11:03:48.792922    9168 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 11:03:48.800849    9168 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0807 11:03:48.804944    9168 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0807 11:03:48.807824    9168 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
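The 496-byte file copied above is the bridge CNI configuration. Its exact contents are not captured in this log; a representative bridge conflist (an illustration only, not the literal file minikube writes) would look like:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF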
	I0807 11:03:48.812961    9168 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0807 11:03:48.813003    9168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 11:03:48.813040    9168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-210000 minikube.k8s.io/updated_at=2024_08_07T11_03_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e minikube.k8s.io/name=running-upgrade-210000 minikube.k8s.io/primary=true
	I0807 11:03:48.847341    9168 kubeadm.go:1113] duration metric: took 34.370333ms to wait for elevateKubeSystemPrivileges
	I0807 11:03:48.847359    9168 ops.go:34] apiserver oom_adj: -16
	I0807 11:03:48.855560    9168 kubeadm.go:394] duration metric: took 4m12.634416417s to StartCluster
	I0807 11:03:48.855583    9168 settings.go:142] acquiring lock: {Name:mk55ff1d0ed65f587ff79ec8ce8fd4d10e83296d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:03:48.855752    9168 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:03:48.856123    9168 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/kubeconfig: {Name:mkee6d4905f7dc60ed0b5cf9ef87de0f637b0682 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:03:48.856328    9168 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:03:48.856351    9168 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0807 11:03:48.856410    9168 config.go:182] Loaded profile config "running-upgrade-210000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0807 11:03:48.856414    9168 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-210000"
	I0807 11:03:48.856431    9168 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-210000"
	W0807 11:03:48.856438    9168 addons.go:243] addon storage-provisioner should already be in state true
	I0807 11:03:48.856437    9168 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-210000"
	I0807 11:03:48.856450    9168 host.go:66] Checking if "running-upgrade-210000" exists ...
	I0807 11:03:48.856461    9168 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-210000"
	I0807 11:03:48.857308    9168 kapi.go:59] client config for running-upgrade-210000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000/client.key", CAFile:"/Users/jenkins/minikube-integration/19389-6671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10592ff90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0807 11:03:48.857432    9168 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-210000"
	W0807 11:03:48.857436    9168 addons.go:243] addon default-storageclass should already be in state true
	I0807 11:03:48.857442    9168 host.go:66] Checking if "running-upgrade-210000" exists ...
	I0807 11:03:48.859787    9168 out.go:177] * Verifying Kubernetes components...
	I0807 11:03:48.860110    9168 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0807 11:03:48.863938    9168 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0807 11:03:48.863946    9168 sshutil.go:53] new ssh client: &{IP:localhost Port:51218 SSHKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/running-upgrade-210000/id_rsa Username:docker}
	I0807 11:03:48.867821    9168 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 11:03:48.871785    9168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 11:03:48.875880    9168 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 11:03:48.875886    9168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0807 11:03:48.875892    9168 sshutil.go:53] new ssh client: &{IP:localhost Port:51218 SSHKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/running-upgrade-210000/id_rsa Username:docker}
	I0807 11:03:48.944424    9168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 11:03:48.949494    9168 api_server.go:52] waiting for apiserver process to appear ...
	I0807 11:03:48.949541    9168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 11:03:48.953566    9168 api_server.go:72] duration metric: took 97.226417ms to wait for apiserver process to appear ...
	I0807 11:03:48.953574    9168 api_server.go:88] waiting for apiserver healthz status ...
	I0807 11:03:48.953581    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:48.967820    9168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 11:03:49.014590    9168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0807 11:03:53.955848    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:53.955952    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:58.956730    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:58.956766    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:03.957361    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:03.957405    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:08.958306    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:08.958327    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:13.959181    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:13.959242    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:18.960749    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:18.960772    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0807 11:04:19.305366    9168 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0807 11:04:19.310714    9168 out.go:177] * Enabled addons: storage-provisioner
	I0807 11:04:19.319596    9168 addons.go:510] duration metric: took 30.4634615s for enable addons: enabled=[storage-provisioner]
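Note the asymmetry above: the storage-provisioner step is the kubectl apply issued at 11:03:48 from inside the guest, while default-storageclass is enabled from the host-side client, which must list StorageClasses at https://10.0.2.15:8443 and is the call that hit the i/o timeout. The failing check is roughly equivalent to:

    sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get storageclasses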
	I0807 11:04:23.962448    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:23.962506    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:28.964808    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:28.964854    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:33.965962    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:33.966017    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:38.968180    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:38.968230    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:43.970472    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:43.970524    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:48.972753    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:48.972833    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:04:48.984698    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:04:48.984771    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:04:48.996320    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:04:48.996390    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:04:49.022093    9168 logs.go:276] 2 containers: [6a43bd083386 1ccd3a59766f]
	I0807 11:04:49.022170    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:04:49.036654    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:04:49.036728    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:04:49.048868    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:04:49.048940    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:04:49.063638    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:04:49.063705    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:04:49.074045    9168 logs.go:276] 0 containers: []
	W0807 11:04:49.074059    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:04:49.074119    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:04:49.085224    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:04:49.085238    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:04:49.085243    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:04:49.100480    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:04:49.100491    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:04:49.147528    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:04:49.147545    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:04:49.162803    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:04:49.162815    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:04:49.175581    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:04:49.175593    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:04:49.192292    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:04:49.192301    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:04:49.213741    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:04:49.213753    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:04:49.226369    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:04:49.226381    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:04:49.249407    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:04:49.249417    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:04:49.260402    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:04:49.260413    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:04:49.295497    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:04:49.295516    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:04:49.300331    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:04:49.300339    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:04:49.314582    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:04:49.314592    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:04:51.827964    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:56.830431    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:56.830512    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:04:56.842235    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:04:56.842304    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:04:56.853958    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:04:56.854027    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:04:56.865257    9168 logs.go:276] 2 containers: [6a43bd083386 1ccd3a59766f]
	I0807 11:04:56.865334    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:04:56.876148    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:04:56.876221    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:04:56.892084    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:04:56.892158    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:04:56.903796    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:04:56.903870    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:04:56.915059    9168 logs.go:276] 0 containers: []
	W0807 11:04:56.915070    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:04:56.915131    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:04:56.926205    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:04:56.926221    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:04:56.926227    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:04:56.938681    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:04:56.938694    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:04:56.951038    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:04:56.951050    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:04:56.955961    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:04:56.955968    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:04:56.973835    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:04:56.973846    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:04:56.990604    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:04:56.990615    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:04:57.009157    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:04:57.009175    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:04:57.025596    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:04:57.025609    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:04:57.038684    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:04:57.038696    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:04:57.064494    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:04:57.064508    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:04:57.102511    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:04:57.102524    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:04:57.137093    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:04:57.137104    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:04:57.151765    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:04:57.151776    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:04:59.672401    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:04.673114    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:04.673166    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:04.688053    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:05:04.688115    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:04.703610    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:05:04.703677    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:04.719328    9168 logs.go:276] 2 containers: [6a43bd083386 1ccd3a59766f]
	I0807 11:05:04.719396    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:04.730635    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:05:04.730705    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:04.745680    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:05:04.745745    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:04.759994    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:05:04.760058    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:04.771189    9168 logs.go:276] 0 containers: []
	W0807 11:05:04.771201    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:04.771258    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:04.782763    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:05:04.782779    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:04.782785    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:04.819895    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:05:04.819906    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:05:04.835863    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:05:04.835875    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:05:04.855993    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:05:04.856007    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:05:04.868836    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:05:04.868850    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:04.881149    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:04.881166    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:04.919423    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:04.919438    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:04.924659    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:05:04.924667    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:05:04.939157    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:05:04.939173    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:05:04.951918    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:05:04.951932    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:05:04.967573    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:05:04.967584    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:05:04.980257    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:05:04.980269    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:05:04.998401    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:04.998412    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:07.526682    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:12.528852    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:12.529108    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:12.554108    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:05:12.554188    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:12.570629    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:05:12.570694    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:12.584518    9168 logs.go:276] 2 containers: [6a43bd083386 1ccd3a59766f]
	I0807 11:05:12.584586    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:12.596209    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:05:12.596296    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:12.606999    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:05:12.607075    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:12.618087    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:05:12.618157    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:12.630242    9168 logs.go:276] 0 containers: []
	W0807 11:05:12.630254    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:12.630314    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:12.642533    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:05:12.642550    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:12.642557    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:12.647662    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:05:12.647674    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:05:12.662795    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:05:12.662812    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:05:12.675230    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:05:12.675242    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:05:12.689587    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:05:12.689602    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:05:12.701531    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:05:12.701546    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:12.714064    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:12.714079    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:12.751786    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:12.751800    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:12.790121    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:05:12.790130    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:05:12.805243    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:05:12.805255    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:05:12.817428    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:05:12.817441    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:05:12.833734    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:05:12.833745    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:05:12.853995    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:12.854006    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:15.381966    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:20.384261    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:20.384512    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:20.414868    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:05:20.414967    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:20.433587    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:05:20.433670    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:20.445766    9168 logs.go:276] 2 containers: [6a43bd083386 1ccd3a59766f]
	I0807 11:05:20.445841    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:20.456461    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:05:20.456529    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:20.466559    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:05:20.466630    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:20.477613    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:05:20.477684    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:20.488988    9168 logs.go:276] 0 containers: []
	W0807 11:05:20.489001    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:20.489064    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:20.500579    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:05:20.500594    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:20.500600    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:20.505428    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:20.505437    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:20.542595    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:05:20.542604    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:05:20.555748    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:05:20.555759    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:05:20.567828    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:05:20.567839    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:05:20.583557    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:05:20.583572    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:05:20.603746    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:05:20.603759    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:05:20.617048    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:20.617060    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:20.655881    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:05:20.655894    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:20.668714    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:20.668729    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:20.693679    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:05:20.693699    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:05:20.708956    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:05:20.708968    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:05:20.731095    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:05:20.731108    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
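
Each pass resolves the component container IDs before collecting anything: the docker ps name filters (k8s_kube-apiserver, k8s_etcd, and so on) map one-to-one onto the "N containers:" lines from logs.go:276. A sketch of that lookup, assuming docker is on PATH (the helper name is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs mirrors the `docker ps -a --filter=name=k8s_<component>
    // --format={{.ID}}` calls that open every gathering pass above.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }

The empty kindnet listing simply reflects that no kindnet container exists in this cluster, which is why the logs.go:278 warning repeats on every pass.
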
	I0807 11:05:23.248963    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:28.251351    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:28.251787    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:28.293014    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:05:28.293157    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:28.315791    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:05:28.315899    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:28.331253    9168 logs.go:276] 2 containers: [6a43bd083386 1ccd3a59766f]
	I0807 11:05:28.331329    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:28.344272    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:05:28.344339    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:28.358891    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:05:28.358958    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:28.370382    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:05:28.370458    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:28.385192    9168 logs.go:276] 0 containers: []
	W0807 11:05:28.385203    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:28.385261    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:28.396600    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:05:28.396616    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:05:28.396621    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:05:28.410838    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:05:28.410850    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:05:28.427602    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:05:28.427611    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:28.442121    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:05:28.442136    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:05:28.457545    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:05:28.457563    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:05:28.471195    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:05:28.471208    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:05:28.483692    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:05:28.483704    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:05:28.502594    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:28.502603    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:28.541205    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:28.541226    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:28.546730    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:28.546748    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:28.583931    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:05:28.583942    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:05:28.600409    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:05:28.600422    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:05:28.614821    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:28.614834    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:31.143488    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:36.145699    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:36.145925    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:36.169700    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:05:36.169808    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:36.185812    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:05:36.185891    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:36.199147    9168 logs.go:276] 2 containers: [6a43bd083386 1ccd3a59766f]
	I0807 11:05:36.199212    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:36.210129    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:05:36.210188    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:36.220761    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:05:36.220830    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:36.232040    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:05:36.232115    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:36.242202    9168 logs.go:276] 0 containers: []
	W0807 11:05:36.242216    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:36.242272    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:36.252301    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:05:36.252317    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:05:36.252323    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:05:36.263999    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:05:36.264009    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:05:36.275769    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:36.275781    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:36.312368    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:05:36.312376    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:05:36.327448    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:05:36.327460    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:05:36.340150    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:05:36.340162    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:05:36.356474    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:05:36.356484    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:05:36.380850    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:05:36.380863    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:05:36.394161    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:36.394172    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:36.421088    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:05:36.421101    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:36.434283    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:36.434295    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:36.439979    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:36.439991    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:36.485852    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:05:36.485865    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:05:39.002789    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:44.005037    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:44.005206    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:44.018867    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:05:44.018953    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:44.033084    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:05:44.033153    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:44.043029    9168 logs.go:276] 2 containers: [6a43bd083386 1ccd3a59766f]
	I0807 11:05:44.043093    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:44.059767    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:05:44.059840    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:44.072094    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:05:44.072167    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:44.086026    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:05:44.086091    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:44.101399    9168 logs.go:276] 0 containers: []
	W0807 11:05:44.101413    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:44.101468    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:44.111875    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:05:44.111892    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:44.111897    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:44.117200    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:44.117210    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:44.153210    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:05:44.153219    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:05:44.172072    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:05:44.172084    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:05:44.186668    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:05:44.186682    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:05:44.198209    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:05:44.198221    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:05:44.209418    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:44.209430    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:44.233488    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:44.233512    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:44.270928    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:05:44.270941    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:05:44.286376    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:05:44.286394    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:05:44.299788    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:05:44.299800    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:05:44.312568    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:05:44.312579    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:05:44.332361    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:05:44.332378    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:46.846463    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:51.849030    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:51.849222    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:51.865067    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:05:51.865150    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:51.877036    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:05:51.877103    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:51.887636    9168 logs.go:276] 3 containers: [0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:05:51.887708    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:51.898375    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:05:51.898442    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:51.909296    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:05:51.909364    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:51.919843    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:05:51.919914    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:51.929950    9168 logs.go:276] 0 containers: []
	W0807 11:05:51.929962    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:51.930023    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:51.940735    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:05:51.940751    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:05:51.940756    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:05:51.955357    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:05:51.955370    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:05:51.967959    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:05:51.967970    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:05:51.980099    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:05:51.980112    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:05:51.993904    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:05:51.993918    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:05:52.012916    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:05:52.012930    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:05:52.024721    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:52.024731    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:52.047843    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:05:52.047851    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:52.059024    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:52.059035    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:52.063637    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:05:52.063646    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:05:52.077554    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:05:52.077564    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:05:52.088998    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:05:52.089012    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:05:52.110112    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:52.110125    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:52.147364    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:52.147377    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
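
From 11:05:51 the coredns filter starts matching a third ID (0e834a4d33b2) and from 11:06:07 a fourth (4ea04b21f860); since docker ps -a also lists exited containers, the growing list suggests coredns is being restarted while the apiserver remains unreachable. Besides container logs, every pass also pulls the kubelet and docker/cri-docker journals. A sketch of those two collection steps (container IDs taken from the log; helper names illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // tailLogs mirrors `docker logs --tail 400 <id>`; CombinedOutput captures
    // the container's stdout and stderr together.
    func tailLogs(id string) (string, error) {
    	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    	return string(out), err
    }

    // unitLogs mirrors `sudo journalctl -u <unit> ... -n 400` as used for the
    // kubelet and the docker/cri-docker services.
    func unitLogs(units ...string) (string, error) {
    	args := []string{"journalctl"}
    	for _, u := range units {
    		args = append(args, "-u", u)
    	}
    	args = append(args, "-n", "400")
    	out, err := exec.Command("sudo", args...).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	for _, id := range []string{"0e834a4d33b2", "6a43bd083386", "1ccd3a59766f"} {
    		if out, err := tailLogs(id); err == nil {
    			fmt.Print(out)
    		}
    	}
    	if out, err := unitLogs("docker", "cri-docker"); err == nil {
    		fmt.Print(out)
    	}
    }
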
	I0807 11:05:54.688241    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:59.690859    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:59.691133    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:59.717187    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:05:59.717308    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:59.734100    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:05:59.734178    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:59.747553    9168 logs.go:276] 3 containers: [0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:05:59.747624    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:59.760669    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:05:59.760733    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:59.771035    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:05:59.771089    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:59.782713    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:05:59.782780    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:59.793127    9168 logs.go:276] 0 containers: []
	W0807 11:05:59.793137    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:59.793188    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:59.803748    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:05:59.803762    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:59.803767    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:59.840164    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:05:59.840172    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:05:59.851704    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:05:59.851713    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:05:59.869981    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:05:59.869991    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:05:59.882292    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:59.882303    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:59.887077    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:05:59.887083    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:05:59.907141    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:05:59.907152    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:05:59.924121    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:05:59.924131    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:05:59.935670    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:59.935680    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:59.958977    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:05:59.958988    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:59.971665    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:59.971679    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:00.005603    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:06:00.005617    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:06:00.017394    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:06:00.017407    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:06:00.029066    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:06:00.029079    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:06:02.549123    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:07.551517    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:07.551705    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:07.574717    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:06:07.574822    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:07.591055    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:06:07.591145    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:07.603933    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:06:07.604006    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:07.615365    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:06:07.615438    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:07.626401    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:06:07.626472    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:07.642444    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:06:07.642523    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:07.652968    9168 logs.go:276] 0 containers: []
	W0807 11:06:07.652981    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:07.653042    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:07.663412    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:06:07.663429    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:06:07.663434    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:06:07.678880    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:07.678894    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:07.704188    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:06:07.704200    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:06:07.721276    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:06:07.721286    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:07.733989    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:07.734002    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:07.771827    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:06:07.771840    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:06:07.784711    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:06:07.784722    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:06:07.796408    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:06:07.796422    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:06:07.810695    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:06:07.810704    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:06:07.825498    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:06:07.825513    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:06:07.837169    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:06:07.837179    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:06:07.848791    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:06:07.848801    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:06:07.860211    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:07.860221    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:07.895487    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:07.895495    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:07.899794    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:06:07.899801    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:06:10.413424    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:15.415821    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:15.416225    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:15.452916    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:06:15.453056    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:15.475360    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:06:15.475467    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:15.490304    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:06:15.490383    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:15.506890    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:06:15.506962    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:15.517994    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:06:15.518066    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:15.528789    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:06:15.528858    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:15.539084    9168 logs.go:276] 0 containers: []
	W0807 11:06:15.539094    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:15.539151    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:15.549780    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:06:15.549798    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:15.549803    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:15.554617    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:06:15.554628    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:06:15.569616    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:15.569626    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:15.606747    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:06:15.606761    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:06:15.620788    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:06:15.620800    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:06:15.635095    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:06:15.635105    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:06:15.646538    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:06:15.646553    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:06:15.661970    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:06:15.661983    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:06:15.673504    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:15.673514    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:15.697719    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:06:15.697726    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:15.720343    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:15.720353    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:15.761354    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:06:15.761365    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:06:15.772885    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:06:15.772897    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:06:15.784311    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:06:15.784324    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:06:15.797107    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:06:15.797118    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:06:18.317524    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:23.320042    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:23.320213    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:23.332817    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:06:23.332897    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:23.343802    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:06:23.343863    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:23.354811    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:06:23.354878    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:23.368218    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:06:23.368275    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:23.379571    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:06:23.379635    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:23.399629    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:06:23.399697    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:23.411311    9168 logs.go:276] 0 containers: []
	W0807 11:06:23.411325    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:23.411383    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:23.426376    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:06:23.426394    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:06:23.426399    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:06:23.440489    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:06:23.440500    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:06:23.452065    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:23.452076    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:23.486708    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:23.486718    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:23.490856    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:06:23.490864    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:06:23.504879    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:06:23.504888    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:06:23.520043    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:06:23.520054    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:06:23.531857    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:06:23.531869    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:06:23.543095    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:06:23.543105    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:06:23.561985    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:23.561998    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:23.586071    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:23.586077    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:23.622414    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:06:23.622427    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:06:23.634797    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:06:23.634810    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:06:23.650330    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:06:23.650339    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:06:23.665484    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:06:23.665495    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
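
The "container status" step shells out with a fallback: it runs crictl when `which crictl` resolves, and otherwise (or when crictl fails) falls back to `sudo docker ps -a`. The same logic as a rough Go sketch, with the same sudo and PATH assumptions as the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus reproduces the fallback shown in the log: run crictl when
    // "which crictl" resolves, otherwise run "sudo docker ps -a".
    func containerStatus() ([]byte, error) {
    	if path, err := exec.LookPath("crictl"); err == nil {
    		if out, err := exec.Command("sudo", path, "ps", "-a").Output(); err == nil {
    			return out, nil
    		}
    	}
    	return exec.Command("sudo", "docker", "ps", "-a").Output()
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("container status error:", err)
    		return
    	}
    	fmt.Print(string(out))
    }
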
	I0807 11:06:26.179140    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:31.177118    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:31.177325    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:31.207836    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:06:31.207921    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:31.221682    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:06:31.221745    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:31.232868    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:06:31.232939    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:31.243682    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:06:31.243740    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:31.254231    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:06:31.254294    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:31.264590    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:06:31.264657    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:31.274951    9168 logs.go:276] 0 containers: []
	W0807 11:06:31.274963    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:31.275016    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:31.289783    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:06:31.289800    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:06:31.289806    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:06:31.302748    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:06:31.302761    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:06:31.317685    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:06:31.317696    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:06:31.329309    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:31.329323    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:31.333764    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:06:31.333771    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:06:31.347976    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:06:31.347986    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:06:31.360050    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:06:31.360062    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:06:31.371967    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:06:31.371977    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:06:31.389779    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:31.389792    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:31.415529    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:06:31.415538    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:06:31.430425    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:06:31.430437    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:06:31.442042    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:31.442054    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:31.477943    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:31.477951    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:31.543998    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:06:31.544012    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:06:31.556363    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:06:31.556374    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:34.066814    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:39.063903    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:39.064065    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:39.079697    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:06:39.079771    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:39.090386    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:06:39.090450    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:39.101053    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:06:39.101127    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:39.114484    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:06:39.114556    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:39.124528    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:06:39.124589    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:39.135370    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:06:39.135434    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:39.145847    9168 logs.go:276] 0 containers: []
	W0807 11:06:39.145858    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:39.145911    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:39.156068    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:06:39.156086    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:06:39.156093    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:06:39.174814    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:06:39.174827    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:06:39.189694    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:06:39.189704    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:39.201193    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:39.201206    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:39.235613    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:06:39.235621    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:06:39.249755    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:06:39.249768    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:06:39.261349    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:06:39.261361    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:06:39.278687    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:06:39.278699    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:06:39.289984    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:39.289999    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:39.314832    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:39.314839    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:39.319520    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:06:39.319530    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:06:39.334255    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:06:39.334268    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:06:39.346195    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:39.346204    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:39.383071    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:06:39.383082    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:06:39.394528    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:06:39.394541    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:06:41.906648    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:46.906478    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:46.906725    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:46.932188    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:06:46.932308    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:46.953262    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:06:46.953336    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:46.967618    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:06:46.967696    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:46.979246    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:06:46.979317    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:46.990186    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:06:46.990248    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:47.001799    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:06:47.001860    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:47.012425    9168 logs.go:276] 0 containers: []
	W0807 11:06:47.012436    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:47.012487    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:47.025092    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:06:47.025111    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:06:47.025117    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:06:47.036842    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:06:47.036856    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:06:47.051678    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:06:47.051688    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:06:47.065519    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:06:47.065532    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:06:47.077739    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:06:47.077749    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:06:47.091741    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:06:47.091753    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:06:47.107313    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:06:47.107324    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:06:47.125160    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:06:47.125170    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:47.136579    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:47.136589    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:47.172980    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:47.172988    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:47.177965    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:47.177973    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:47.213872    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:06:47.213882    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:06:47.228764    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:06:47.228774    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:06:47.249322    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:06:47.249334    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:06:47.268265    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:47.268274    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:49.795058    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:54.795959    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:54.796336    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:54.829390    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:06:54.829523    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:54.849250    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:06:54.849347    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:54.863756    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:06:54.863834    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:54.877029    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:06:54.877101    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:54.887709    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:06:54.887776    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:54.898538    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:06:54.898610    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:54.909359    9168 logs.go:276] 0 containers: []
	W0807 11:06:54.909369    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:54.909429    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:54.920652    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:06:54.920668    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:06:54.920673    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:06:54.932544    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:54.932555    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:54.957400    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:06:54.957411    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:54.970878    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:54.970889    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:54.975514    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:06:54.975523    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:06:54.998646    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:06:54.998656    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:06:55.013157    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:06:55.013169    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:06:55.031016    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:55.031026    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:55.065412    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:06:55.065420    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:06:55.078922    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:06:55.078934    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:06:55.090929    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:06:55.090940    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:06:55.112739    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:06:55.112755    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:06:55.124825    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:06:55.124840    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:06:55.136231    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:06:55.136244    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:06:55.151660    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:55.151670    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:57.690857    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:02.692018    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:02.692215    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:07:02.711123    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:07:02.711219    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:07:02.726571    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:07:02.726649    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:07:02.739301    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:07:02.739381    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:07:02.749689    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:07:02.749757    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:07:02.759720    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:07:02.759781    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:07:02.769847    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:07:02.769919    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:07:02.780099    9168 logs.go:276] 0 containers: []
	W0807 11:07:02.780114    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:07:02.780176    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:07:02.792988    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:07:02.793005    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:07:02.793009    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:07:02.805205    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:07:02.805217    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:07:02.817301    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:07:02.817313    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:07:02.833259    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:07:02.833271    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:07:02.851106    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:07:02.851117    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:07:02.875652    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:07:02.875663    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:07:02.887732    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:07:02.887743    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:07:02.923252    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:07:02.923260    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:07:02.962449    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:07:02.962462    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:07:02.980312    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:07:02.980323    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:07:02.985097    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:07:02.985103    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:07:02.997554    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:07:02.997564    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:07:03.008939    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:07:03.008950    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:07:03.023177    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:07:03.023186    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:07:03.035061    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:07:03.035072    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:07:05.551681    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:10.553671    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:10.553908    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:07:10.572392    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:07:10.572486    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:07:10.586230    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:07:10.586311    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:07:10.599832    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:07:10.599909    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:07:10.610316    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:07:10.610379    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:07:10.620898    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:07:10.620968    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:07:10.631075    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:07:10.631145    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:07:10.644636    9168 logs.go:276] 0 containers: []
	W0807 11:07:10.644649    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:07:10.644704    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:07:10.654776    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:07:10.654796    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:07:10.654802    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:07:10.691699    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:07:10.691710    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:07:10.705240    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:07:10.705250    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:07:10.716341    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:07:10.716353    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:07:10.728273    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:07:10.728285    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:07:10.740510    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:07:10.740520    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:07:10.755232    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:07:10.755243    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:07:10.766986    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:07:10.766998    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:07:10.778566    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:07:10.778575    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:07:10.818802    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:07:10.818812    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:07:10.834712    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:07:10.834720    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:07:10.859783    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:07:10.859793    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:07:10.864342    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:07:10.864351    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:07:10.879167    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:07:10.879181    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:07:10.893059    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:07:10.893071    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:07:13.413051    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:18.414811    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:18.414935    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:07:18.426296    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:07:18.426372    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:07:18.438178    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:07:18.438248    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:07:18.448932    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:07:18.448999    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:07:18.459833    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:07:18.459903    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:07:18.470169    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:07:18.470247    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:07:18.480398    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:07:18.480463    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:07:18.490941    9168 logs.go:276] 0 containers: []
	W0807 11:07:18.490953    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:07:18.491011    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:07:18.501744    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:07:18.501760    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:07:18.501766    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:07:18.507326    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:07:18.507339    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:07:18.527991    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:07:18.528007    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:07:18.548670    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:07:18.548683    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:07:18.567534    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:07:18.567550    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:07:18.611041    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:07:18.611057    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:07:18.624870    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:07:18.624884    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:07:18.638244    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:07:18.638256    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:07:18.649854    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:07:18.649866    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:07:18.686114    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:07:18.686133    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:07:18.704683    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:07:18.704692    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:07:18.716772    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:07:18.716785    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:07:18.729839    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:07:18.729852    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:07:18.749739    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:07:18.749754    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:07:18.762906    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:07:18.762918    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:07:21.289248    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:26.291181    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:26.291286    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:07:26.306200    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:07:26.306274    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:07:26.316779    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:07:26.316855    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:07:26.327566    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:07:26.327637    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:07:26.338438    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:07:26.338509    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:07:26.349068    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:07:26.349143    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:07:26.359967    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:07:26.360043    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:07:26.370958    9168 logs.go:276] 0 containers: []
	W0807 11:07:26.370967    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:07:26.371021    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:07:26.381329    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:07:26.381346    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:07:26.381352    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:07:26.399102    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:07:26.399112    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:07:26.412246    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:07:26.412256    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:07:26.423931    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:07:26.423941    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:07:26.435309    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:07:26.435321    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:07:26.446649    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:07:26.446661    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:07:26.458876    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:07:26.458887    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:07:26.481888    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:07:26.481895    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:07:26.516222    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:07:26.516228    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:07:26.551369    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:07:26.551381    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:07:26.565647    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:07:26.565655    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:07:26.576850    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:07:26.576859    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:07:26.588554    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:07:26.588565    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:07:26.594393    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:07:26.594400    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:07:26.606106    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:07:26.606116    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:07:29.122483    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:34.124458    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:34.124575    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:07:34.139825    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:07:34.139909    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:07:34.153217    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:07:34.153276    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:07:34.164064    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:07:34.164135    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:07:34.174706    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:07:34.174776    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:07:34.185783    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:07:34.185845    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:07:34.196958    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:07:34.197019    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:07:34.206585    9168 logs.go:276] 0 containers: []
	W0807 11:07:34.206595    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:07:34.206641    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:07:34.216869    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:07:34.216886    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:07:34.216891    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:07:34.253527    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:07:34.253537    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:07:34.264791    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:07:34.264802    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:07:34.276461    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:07:34.276470    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:07:34.291563    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:07:34.291574    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:07:34.296712    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:07:34.296722    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:07:34.310954    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:07:34.310966    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:07:34.323729    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:07:34.323742    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:07:34.335500    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:07:34.335512    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:07:34.373044    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:07:34.373053    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:07:34.391033    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:07:34.391047    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:07:34.402862    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:07:34.402872    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:07:34.416885    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:07:34.416896    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:07:34.428916    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:07:34.428925    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:07:34.440776    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:07:34.440786    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:07:36.966977    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:41.969029    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:41.969155    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:07:41.981825    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:07:41.981907    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:07:41.992886    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:07:41.992953    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:07:42.003918    9168 logs.go:276] 4 containers: [51d9bb212e49 4ea04b21f860 0e834a4d33b2 6a43bd083386]
	I0807 11:07:42.003994    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:07:42.014982    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:07:42.015054    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:07:42.025538    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:07:42.025606    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:07:42.044261    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:07:42.044333    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:07:42.055122    9168 logs.go:276] 0 containers: []
	W0807 11:07:42.055136    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:07:42.055189    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:07:42.069042    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:07:42.069059    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:07:42.069063    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:07:42.083679    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:07:42.083688    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:07:42.108768    9168 logs.go:123] Gathering logs for coredns [51d9bb212e49] ...
	I0807 11:07:42.108779    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d9bb212e49"
	I0807 11:07:42.120182    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:07:42.120201    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:07:42.131696    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:07:42.131706    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:07:42.143619    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:07:42.143631    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:07:42.155504    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:07:42.155515    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:07:42.191929    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:07:42.191936    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:07:42.196743    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:07:42.196750    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:07:42.231911    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:07:42.231923    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:07:42.246184    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:07:42.246192    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:07:42.260160    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:07:42.260172    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:07:42.274200    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:07:42.274209    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:07:42.285807    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:07:42.285822    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:07:42.303690    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:07:42.303700    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:07:44.817037    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:49.819165    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:49.824829    9168 out.go:177] 
	W0807 11:07:49.828877    9168 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0807 11:07:49.828890    9168 out.go:239] * 
	W0807 11:07:49.829853    9168 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:07:49.840628    9168 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-210000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-07 11:07:49.943146 -0700 PDT m=+1322.567574293
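
The repeating blocks in the stderr above are minikube's readiness loop: roughly every 2.5s it probes https://10.0.2.15:8443/healthz with a ~5s per-request client timeout (api_server.go:253/269), and after each timeout it re-enumerates the control-plane containers and gathers their logs before the next probe, until the overall "wait 6m0s for node" deadline expires with GUEST_START. The Go sketch below illustrates that probe-with-timeout pattern only; it is not minikube's actual implementation, and the pollHealthz name, the interval constants, and the InsecureSkipVerify shortcut are assumptions inferred from the log.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // pollHealthz is a hypothetical illustration of the loop seen in the
    // log: probe url until it returns 200 OK or the overall deadline passes.
    func pollHealthz(url string, interval, timeout, deadline time.Duration) error {
    	client := &http.Client{
    		// Per-request timeout; exceeding it produces exactly the
    		// "Client.Timeout exceeded while awaiting headers" wording above.
    		Timeout: timeout,
    		Transport: &http.Transport{
    			// Assumption: skip verification of the apiserver's self-signed cert.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	end := time.Now().Add(deadline)
    	for time.Now().Before(end) {
    		resp, err := client.Get(url)
    		if err == nil {
    			healthy := resp.StatusCode == http.StatusOK
    			resp.Body.Close()
    			if healthy {
    				return nil
    			}
    		}
    		// The real code gathers container logs here before retrying;
    		// this sketch just waits out the retry interval.
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
    }

    func main() {
    	err := pollHealthz("https://10.0.2.15:8443/healthz",
    		2500*time.Millisecond, // spacing between probes seen in the log
    		5*time.Second,         // per-request client timeout
    		6*time.Minute)         // overall node-wait deadline ("wait 6m0s for node")
    	if err != nil {
    		fmt.Println("GUEST_START:", err)
    	}
    }

Setting http.Client.Timeout (rather than a context on each request) is what yields the net/http "Client.Timeout exceeded while awaiting headers" message on every failed probe in the capture above.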
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-210000 -n running-upgrade-210000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-210000 -n running-upgrade-210000: exit status 2 (15.5587155s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
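
The --format={{.Host}} argument in the status invocation above is a Go text/template rendered against the command's status struct, which is why stdout carries the single word "Running" even though the process exits 2 (host up, cluster components not healthy). A minimal, self-contained sketch of that rendering follows; the Status type and its field set are stand-ins for illustration, not minikube's real types.

    package main

    import (
    	"os"
    	"text/template"
    )

    // Status is a hypothetical stand-in for the struct a status command
    // exposes to --format templates.
    type Status struct {
    	Host      string
    	Kubelet   string
    	APIServer string
    }

    func main() {
    	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
    	// --format={{.Host}} selects a single field from the struct.
    	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
    	if err := tmpl.Execute(os.Stdout, st); err != nil {
    		panic(err)
    	}
    	// Output: Running
    }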
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-210000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-880000          | force-systemd-flag-880000 | jenkins | v1.33.1 | 07 Aug 24 10:58 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-875000              | force-systemd-env-875000  | jenkins | v1.33.1 | 07 Aug 24 10:58 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-875000           | force-systemd-env-875000  | jenkins | v1.33.1 | 07 Aug 24 10:58 PDT | 07 Aug 24 10:58 PDT |
	| start   | -p docker-flags-198000                | docker-flags-198000       | jenkins | v1.33.1 | 07 Aug 24 10:58 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-880000             | force-systemd-flag-880000 | jenkins | v1.33.1 | 07 Aug 24 10:58 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-880000          | force-systemd-flag-880000 | jenkins | v1.33.1 | 07 Aug 24 10:58 PDT | 07 Aug 24 10:58 PDT |
	| start   | -p cert-expiration-081000             | cert-expiration-081000    | jenkins | v1.33.1 | 07 Aug 24 10:58 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-198000 ssh               | docker-flags-198000       | jenkins | v1.33.1 | 07 Aug 24 10:58 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-198000 ssh               | docker-flags-198000       | jenkins | v1.33.1 | 07 Aug 24 10:58 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-198000                | docker-flags-198000       | jenkins | v1.33.1 | 07 Aug 24 10:58 PDT | 07 Aug 24 10:58 PDT |
	| start   | -p cert-options-891000                | cert-options-891000       | jenkins | v1.33.1 | 07 Aug 24 10:58 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-891000 ssh               | cert-options-891000       | jenkins | v1.33.1 | 07 Aug 24 10:58 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-891000 -- sudo        | cert-options-891000       | jenkins | v1.33.1 | 07 Aug 24 10:58 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-891000                | cert-options-891000       | jenkins | v1.33.1 | 07 Aug 24 10:58 PDT | 07 Aug 24 10:58 PDT |
	| start   | -p running-upgrade-210000             | minikube                  | jenkins | v1.26.0 | 07 Aug 24 10:58 PDT | 07 Aug 24 10:59 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-210000             | running-upgrade-210000    | jenkins | v1.33.1 | 07 Aug 24 10:59 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-081000             | cert-expiration-081000    | jenkins | v1.33.1 | 07 Aug 24 11:01 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-081000             | cert-expiration-081000    | jenkins | v1.33.1 | 07 Aug 24 11:01 PDT | 07 Aug 24 11:01 PDT |
	| start   | -p kubernetes-upgrade-465000          | kubernetes-upgrade-465000 | jenkins | v1.33.1 | 07 Aug 24 11:01 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-465000          | kubernetes-upgrade-465000 | jenkins | v1.33.1 | 07 Aug 24 11:01 PDT | 07 Aug 24 11:01 PDT |
	| start   | -p kubernetes-upgrade-465000          | kubernetes-upgrade-465000 | jenkins | v1.33.1 | 07 Aug 24 11:01 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0     |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-465000          | kubernetes-upgrade-465000 | jenkins | v1.33.1 | 07 Aug 24 11:01 PDT | 07 Aug 24 11:01 PDT |
	| start   | -p stopped-upgrade-423000             | minikube                  | jenkins | v1.26.0 | 07 Aug 24 11:01 PDT | 07 Aug 24 11:02 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-423000 stop           | minikube                  | jenkins | v1.26.0 | 07 Aug 24 11:02 PDT | 07 Aug 24 11:02 PDT |
	| start   | -p stopped-upgrade-423000             | stopped-upgrade-423000    | jenkins | v1.33.1 | 07 Aug 24 11:02 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 11:02:38
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
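	The [IWEF] prefix above encodes severity (Info, Warning, Error, Fatal) ahead of the mmdd date, timestamp, and PID. A minimal sketch for pulling only warning-and-worse lines out of such a log, assuming it has been saved to a file named minikube-last-start.log:
		# Match the severity letter plus the mmdd date that follows it.
		grep -E '^[WEF][0-9]{4} ' minikube-last-start.log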
	I0807 11:02:38.767216    9637 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:02:38.767362    9637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:02:38.767375    9637 out.go:304] Setting ErrFile to fd 2...
	I0807 11:02:38.767378    9637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:02:38.767544    9637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:02:38.768838    9637 out.go:298] Setting JSON to false
	I0807 11:02:38.787763    9637 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5527,"bootTime":1723048231,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 11:02:38.787830    9637 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 11:02:38.792598    9637 out.go:177] * [stopped-upgrade-423000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 11:02:38.800503    9637 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 11:02:38.800555    9637 notify.go:220] Checking for updates...
	I0807 11:02:38.807555    9637 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:02:38.810574    9637 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 11:02:38.813582    9637 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 11:02:38.816610    9637 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 11:02:38.819612    9637 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 11:02:38.822826    9637 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0807 11:02:38.825534    9637 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0807 11:02:38.828590    9637 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 11:02:38.832530    9637 out.go:177] * Using the qemu2 driver based on existing profile
	I0807 11:02:38.839449    9637 start.go:297] selected driver: qemu2
	I0807 11:02:38.839455    9637 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-423000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51476 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-423000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0807 11:02:38.839507    9637 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 11:02:38.842322    9637 cni.go:84] Creating CNI manager for ""
	I0807 11:02:38.842340    9637 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 11:02:38.842376    9637 start.go:340] cluster config:
	{Name:stopped-upgrade-423000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51476 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-423000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0807 11:02:38.842439    9637 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:02:38.849540    9637 out.go:177] * Starting "stopped-upgrade-423000" primary control-plane node in "stopped-upgrade-423000" cluster
	I0807 11:02:38.853666    9637 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0807 11:02:38.853682    9637 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0807 11:02:38.853689    9637 cache.go:56] Caching tarball of preloaded images
	I0807 11:02:38.853748    9637 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 11:02:38.853753    9637 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0807 11:02:38.853807    9637 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/config.json ...
	I0807 11:02:38.854270    9637 start.go:360] acquireMachinesLock for stopped-upgrade-423000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:02:38.854307    9637 start.go:364] duration metric: took 30.333µs to acquireMachinesLock for "stopped-upgrade-423000"
	I0807 11:02:38.854315    9637 start.go:96] Skipping create...Using existing machine configuration
	I0807 11:02:38.854323    9637 fix.go:54] fixHost starting: 
	I0807 11:02:38.854435    9637 fix.go:112] recreateIfNeeded on stopped-upgrade-423000: state=Stopped err=<nil>
	W0807 11:02:38.854443    9637 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 11:02:38.858570    9637 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-423000" ...
	I0807 11:02:38.010758    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:02:38.010881    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:02:38.022798    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:02:38.022866    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:02:38.033504    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:02:38.033575    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:02:38.044064    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:02:38.044130    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:02:38.055882    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:02:38.055976    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:02:38.066931    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:02:38.067016    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:02:38.077731    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:02:38.077804    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:02:38.092055    9168 logs.go:276] 0 containers: []
	W0807 11:02:38.092066    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:02:38.092127    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:02:38.103156    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:02:38.103175    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:02:38.103181    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:02:38.108039    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:02:38.108047    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:02:38.144319    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:02:38.144332    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:02:38.158222    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:02:38.158233    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:02:38.170699    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:02:38.170711    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:02:38.181641    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:02:38.181652    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:02:38.193855    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:02:38.193866    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:02:38.207664    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:02:38.207675    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:02:38.225498    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:02:38.225507    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:02:38.237142    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:02:38.237155    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:02:38.254054    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:02:38.254063    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:02:38.266973    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:02:38.266987    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:02:38.291623    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:02:38.291635    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:02:38.303553    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:02:38.303565    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:02:38.344684    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:02:38.344693    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:02:38.362660    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:02:38.362673    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:02:38.373837    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:02:38.373847    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
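	Each "Gathering logs for ..." step above pairs a docker ps -a name filter with a docker logs --tail 400 dump of every match. A hedged shell equivalent of one such pass, using the k8s_etcd filter from the log:
		# -q prints bare container IDs; tail each container's last 400 log lines.
		for id in $(docker ps -aq --filter name=k8s_etcd); do
		  docker logs --tail 400 "$id"
		done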
	I0807 11:02:40.887310    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:02:38.862507    9637 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:02:38.862571    9637 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/stopped-upgrade-423000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/stopped-upgrade-423000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/stopped-upgrade-423000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51441-:22,hostfwd=tcp::51442-:2376,hostname=stopped-upgrade-423000 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/stopped-upgrade-423000/disk.qcow2
	I0807 11:02:38.910174    9637 main.go:141] libmachine: STDOUT: 
	I0807 11:02:38.910202    9637 main.go:141] libmachine: STDERR: 
	I0807 11:02:38.910207    9637 main.go:141] libmachine: Waiting for VM to start (ssh -p 51441 docker@127.0.0.1)...
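	The qemu-system-aarch64 invocation above uses user-mode networking, so each hostfwd entry maps a host loopback port to a guest port: 51441 to 22 for SSH and 51442 to 2376 for Docker's TLS socket. A trimmed sketch of the same wiring, with the disk path shortened to a placeholder:
		# User-mode NIC with two port forwards; disk.qcow2 stands in for the real path.
		qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 \
		  -display none -daemonize \
		  -nic user,model=virtio,hostfwd=tcp::51441-:22,hostfwd=tcp::51442-:2376 \
		  disk.qcow2
	This is why the wait step above dials ssh -p 51441 docker@127.0.0.1 on the host rather than the guest address.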
	I0807 11:02:45.888808    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:02:45.889021    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:02:45.900462    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:02:45.900539    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:02:45.912171    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:02:45.912241    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:02:45.923108    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:02:45.923176    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:02:45.933753    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:02:45.933815    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:02:45.944252    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:02:45.944308    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:02:45.955580    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:02:45.955651    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:02:45.966136    9168 logs.go:276] 0 containers: []
	W0807 11:02:45.966148    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:02:45.966206    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:02:45.976545    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:02:45.976566    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:02:45.976571    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:02:45.987925    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:02:45.987936    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:02:45.999410    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:02:45.999424    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:02:46.013845    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:02:46.013854    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:02:46.025498    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:02:46.025507    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:02:46.046785    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:02:46.046795    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:02:46.062342    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:02:46.062353    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:02:46.074306    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:02:46.074317    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:02:46.078894    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:02:46.078900    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:02:46.092502    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:02:46.092513    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:02:46.104269    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:02:46.104279    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:02:46.129319    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:02:46.129326    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:02:46.141301    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:02:46.141313    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:02:46.181340    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:02:46.181347    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:02:46.216979    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:02:46.216990    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:02:46.231076    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:02:46.231088    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:02:46.245401    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:02:46.245412    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:02:48.768810    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:02:53.771118    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:02:53.771220    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:02:53.785339    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:02:53.785411    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:02:53.799233    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:02:53.799300    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:02:53.809784    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:02:53.809849    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:02:53.820640    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:02:53.820709    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:02:53.831666    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:02:53.831729    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:02:53.843033    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:02:53.843099    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:02:53.853187    9168 logs.go:276] 0 containers: []
	W0807 11:02:53.853199    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:02:53.853257    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:02:53.863939    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:02:53.863959    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:02:53.863965    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:02:53.881384    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:02:53.881395    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:02:53.896087    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:02:53.896097    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:02:53.913484    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:02:53.913494    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:02:53.925131    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:02:53.925140    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:02:53.936764    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:02:53.936775    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:02:53.948010    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:02:53.948025    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:02:53.961975    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:02:53.961988    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:02:54.001089    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:02:54.001101    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:02:54.012599    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:02:54.012610    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:02:54.024032    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:02:54.024044    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:02:54.047991    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:02:54.048000    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:02:54.052212    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:02:54.052218    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:02:54.069636    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:02:54.069646    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:02:54.081650    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:02:54.081666    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:02:54.092468    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:02:54.092478    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:02:54.130729    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:02:54.130737    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:02:56.644368    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
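	The cycle that keeps returning to "Checking apiserver healthz" is a plain HTTPS probe of the guest apiserver with a short client timeout; each timeout above triggers another round of log gathering. A hedged manual equivalent, run from somewhere that can reach 10.0.2.15:
		# -k skips certificate verification; the 5-second cap is an assumption.
		curl -k --max-time 5 https://10.0.2.15:8443/healthz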
	I0807 11:02:59.205696    9637 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/config.json ...
	I0807 11:02:59.206526    9637 machine.go:94] provisionDockerMachine start ...
	I0807 11:02:59.206677    9637 main.go:141] libmachine: Using SSH client type: native
	I0807 11:02:59.207146    9637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bdea10] 0x100be1270 <nil>  [] 0s} localhost 51441 <nil> <nil>}
	I0807 11:02:59.207161    9637 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 11:02:59.298813    9637 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0807 11:02:59.298840    9637 buildroot.go:166] provisioning hostname "stopped-upgrade-423000"
	I0807 11:02:59.298949    9637 main.go:141] libmachine: Using SSH client type: native
	I0807 11:02:59.299182    9637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bdea10] 0x100be1270 <nil>  [] 0s} localhost 51441 <nil> <nil>}
	I0807 11:02:59.299194    9637 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-423000 && echo "stopped-upgrade-423000" | sudo tee /etc/hostname
	I0807 11:02:59.389865    9637 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-423000
	
	I0807 11:02:59.389954    9637 main.go:141] libmachine: Using SSH client type: native
	I0807 11:02:59.390139    9637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bdea10] 0x100be1270 <nil>  [] 0s} localhost 51441 <nil> <nil>}
	I0807 11:02:59.390153    9637 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-423000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-423000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-423000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 11:02:59.471298    9637 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 11:02:59.471318    9637 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19389-6671/.minikube CaCertPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19389-6671/.minikube}
	I0807 11:02:59.471327    9637 buildroot.go:174] setting up certificates
	I0807 11:02:59.471335    9637 provision.go:84] configureAuth start
	I0807 11:02:59.471345    9637 provision.go:143] copyHostCerts
	I0807 11:02:59.471432    9637 exec_runner.go:144] found /Users/jenkins/minikube-integration/19389-6671/.minikube/ca.pem, removing ...
	I0807 11:02:59.471441    9637 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19389-6671/.minikube/ca.pem
	I0807 11:02:59.471549    9637 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19389-6671/.minikube/ca.pem (1082 bytes)
	I0807 11:02:59.471743    9637 exec_runner.go:144] found /Users/jenkins/minikube-integration/19389-6671/.minikube/cert.pem, removing ...
	I0807 11:02:59.471747    9637 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19389-6671/.minikube/cert.pem
	I0807 11:02:59.471801    9637 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19389-6671/.minikube/cert.pem (1123 bytes)
	I0807 11:02:59.471925    9637 exec_runner.go:144] found /Users/jenkins/minikube-integration/19389-6671/.minikube/key.pem, removing ...
	I0807 11:02:59.471929    9637 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19389-6671/.minikube/key.pem
	I0807 11:02:59.471979    9637 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19389-6671/.minikube/key.pem (1675 bytes)
	I0807 11:02:59.472072    9637 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-423000 san=[127.0.0.1 localhost minikube stopped-upgrade-423000]
	I0807 11:02:59.555516    9637 provision.go:177] copyRemoteCerts
	I0807 11:02:59.555562    9637 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 11:02:59.555571    9637 sshutil.go:53] new ssh client: &{IP:localhost Port:51441 SSHKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	I0807 11:02:59.593937    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 11:02:59.601042    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0807 11:02:59.607510    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0807 11:02:59.614131    9637 provision.go:87] duration metric: took 142.791583ms to configureAuth
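	configureAuth above regenerates a server certificate whose SANs cover 127.0.0.1, localhost, minikube, and stopped-upgrade-423000, then copies it into /etc/docker. A hedged spot-check of the pushed material, using the paths from the log:
		# Verify the server cert chains to the pushed CA, then list its SANs.
		openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
		openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'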
	I0807 11:02:59.614140    9637 buildroot.go:189] setting minikube options for container-runtime
	I0807 11:02:59.614252    9637 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0807 11:02:59.614290    9637 main.go:141] libmachine: Using SSH client type: native
	I0807 11:02:59.614375    9637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bdea10] 0x100be1270 <nil>  [] 0s} localhost 51441 <nil> <nil>}
	I0807 11:02:59.614382    9637 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0807 11:02:59.685268    9637 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0807 11:02:59.685279    9637 buildroot.go:70] root file system type: tmpfs
	I0807 11:02:59.685334    9637 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0807 11:02:59.685397    9637 main.go:141] libmachine: Using SSH client type: native
	I0807 11:02:59.685523    9637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bdea10] 0x100be1270 <nil>  [] 0s} localhost 51441 <nil> <nil>}
	I0807 11:02:59.685559    9637 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0807 11:02:59.760832    9637 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0807 11:02:59.760895    9637 main.go:141] libmachine: Using SSH client type: native
	I0807 11:02:59.761007    9637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bdea10] 0x100be1270 <nil>  [] 0s} localhost 51441 <nil> <nil>}
	I0807 11:02:59.761015    9637 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0807 11:03:00.133339    9637 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0807 11:03:00.133353    9637 machine.go:97] duration metric: took 926.822792ms to provisionDockerMachine
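	The unit update above is deliberately idempotent: docker.service.new is written first, and the mv/daemon-reload/restart chain runs only when diff reports a difference (here diff failed because no old unit existed, so the new one was installed and enabled). Hedged follow-up checks that the unit took effect:
		# Show the unit file systemd actually loaded and confirm dockerd is running.
		sudo systemctl cat docker.service
		sudo systemctl is-active docker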
	I0807 11:03:00.133360    9637 start.go:293] postStartSetup for "stopped-upgrade-423000" (driver="qemu2")
	I0807 11:03:00.133366    9637 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 11:03:00.133417    9637 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 11:03:00.133427    9637 sshutil.go:53] new ssh client: &{IP:localhost Port:51441 SSHKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	I0807 11:03:00.171921    9637 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 11:03:00.173193    9637 info.go:137] Remote host: Buildroot 2021.02.12
	I0807 11:03:00.173200    9637 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19389-6671/.minikube/addons for local assets ...
	I0807 11:03:00.173289    9637 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19389-6671/.minikube/files for local assets ...
	I0807 11:03:00.173415    9637 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19389-6671/.minikube/files/etc/ssl/certs/71662.pem -> 71662.pem in /etc/ssl/certs
	I0807 11:03:00.173563    9637 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 11:03:00.176258    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/files/etc/ssl/certs/71662.pem --> /etc/ssl/certs/71662.pem (1708 bytes)
	I0807 11:03:00.182635    9637 start.go:296] duration metric: took 49.270667ms for postStartSetup
	I0807 11:03:00.182649    9637 fix.go:56] duration metric: took 21.328482417s for fixHost
	I0807 11:03:00.182681    9637 main.go:141] libmachine: Using SSH client type: native
	I0807 11:03:00.182794    9637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bdea10] 0x100be1270 <nil>  [] 0s} localhost 51441 <nil> <nil>}
	I0807 11:03:00.182801    9637 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 11:03:00.258449    9637 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723053780.549436296
	
	I0807 11:03:00.258458    9637 fix.go:216] guest clock: 1723053780.549436296
	I0807 11:03:00.258462    9637 fix.go:229] Guest: 2024-08-07 11:03:00.549436296 -0700 PDT Remote: 2024-08-07 11:03:00.182651 -0700 PDT m=+21.444152959 (delta=366.785296ms)
	I0807 11:03:00.258479    9637 fix.go:200] guest clock delta is within tolerance: 366.785296ms
	I0807 11:03:00.258482    9637 start.go:83] releasing machines lock for "stopped-upgrade-423000", held for 21.404324458s
	I0807 11:03:00.258551    9637 ssh_runner.go:195] Run: cat /version.json
	I0807 11:03:00.258560    9637 sshutil.go:53] new ssh client: &{IP:localhost Port:51441 SSHKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	I0807 11:03:00.258551    9637 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0807 11:03:00.258624    9637 sshutil.go:53] new ssh client: &{IP:localhost Port:51441 SSHKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	W0807 11:03:00.259151    9637 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51441: connect: connection refused
	I0807 11:03:00.259172    9637 retry.go:31] will retry after 206.590189ms: dial tcp [::1]:51441: connect: connection refused
	W0807 11:03:00.295526    9637 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0807 11:03:00.295576    9637 ssh_runner.go:195] Run: systemctl --version
	I0807 11:03:00.297384    9637 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 11:03:00.298866    9637 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 11:03:00.298889    9637 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0807 11:03:00.301911    9637 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0807 11:03:00.306586    9637 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0807 11:03:00.306598    9637 start.go:495] detecting cgroup driver to use...
	I0807 11:03:00.306673    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 11:03:00.314049    9637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0807 11:03:00.317587    9637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0807 11:03:00.320906    9637 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0807 11:03:00.320933    9637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0807 11:03:00.323713    9637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 11:03:00.326573    9637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0807 11:03:00.329953    9637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 11:03:00.333317    9637 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 11:03:00.336447    9637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0807 11:03:00.339290    9637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0807 11:03:00.347597    9637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
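	The run of sed edits above rewrites /etc/containerd/config.toml in place: cgroupfs instead of the systemd cgroup driver, the runc.v2 shim, the pause:3.7 sandbox image, the standard /etc/cni/net.d conf_dir, and unprivileged ports enabled. A hedged spot-check of the result:
		# Surface the keys the edits target, with line numbers for context.
		grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml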
	I0807 11:03:00.352069    9637 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 11:03:00.354797    9637 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 11:03:00.357619    9637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 11:03:00.432382    9637 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0807 11:03:00.437998    9637 start.go:495] detecting cgroup driver to use...
	I0807 11:03:00.438072    9637 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0807 11:03:00.443886    9637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 11:03:00.449137    9637 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 11:03:00.456601    9637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 11:03:00.461253    9637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 11:03:00.465948    9637 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0807 11:03:00.488745    9637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 11:03:00.493040    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 11:03:00.498638    9637 ssh_runner.go:195] Run: which cri-dockerd
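	With /etc/crictl.yaml now pointing at the cri-dockerd socket, crictl calls are served by Docker through the CRI shim. A hedged example using the endpoint string written above; the explicit flag just mirrors what the config file already sets:
		sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a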
	I0807 11:03:00.499865    9637 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0807 11:03:00.502494    9637 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0807 11:03:00.508883    9637 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0807 11:03:00.592384    9637 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0807 11:03:00.656697    9637 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0807 11:03:00.656766    9637 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0807 11:03:00.661999    9637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 11:03:00.737284    9637 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 11:03:01.852786    9637 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.115492667s)
	I0807 11:03:01.852849    9637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0807 11:03:01.857800    9637 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0807 11:03:01.865915    9637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 11:03:01.871169    9637 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0807 11:03:01.944635    9637 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0807 11:03:02.004356    9637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 11:03:02.071563    9637 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0807 11:03:02.077626    9637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 11:03:02.082404    9637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 11:03:02.140879    9637 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0807 11:03:02.179784    9637 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0807 11:03:02.179858    9637 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0807 11:03:02.182601    9637 start.go:563] Will wait 60s for crictl version
	I0807 11:03:02.182669    9637 ssh_runner.go:195] Run: which crictl
	I0807 11:03:02.184116    9637 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 11:03:02.197920    9637 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0807 11:03:02.197986    9637 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 11:03:02.214404    9637 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 11:03:02.235572    9637 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0807 11:03:02.235657    9637 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0807 11:03:02.236969    9637 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 11:03:02.240839    9637 kubeadm.go:883] updating cluster {Name:stopped-upgrade-423000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51476 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-423000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0807 11:03:02.240881    9637 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0807 11:03:02.240923    9637 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0807 11:03:02.251470    9637 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0807 11:03:02.251478    9637 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
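	The mismatch above is the k8s.gcr.io to registry.k8s.io registry rename: the images in the old tarball carry the legacy prefix, so minikube concludes the preload is missing and copies a fresh one over. Retagging illustrates that only the name differs, not the image content (a hedged one-liner, not a step minikube performs here):
		docker tag k8s.gcr.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-apiserver:v1.24.1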
	I0807 11:03:02.251525    9637 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0807 11:03:02.254603    9637 ssh_runner.go:195] Run: which lz4
	I0807 11:03:02.255899    9637 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0807 11:03:02.257050    9637 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0807 11:03:02.257058    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0807 11:03:03.174358    9637 docker.go:649] duration metric: took 918.497167ms to copy over tarball
	I0807 11:03:03.174423    9637 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
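
The transfer above follows a stat-then-copy pattern: the existence check exits 1 when /preloaded.tar.lz4 is absent, which triggers the roughly 360 MB scp and the lz4 extraction into /var. A small Go sketch of that decision under the same convention; scpOver is a hypothetical stand-in for the real transfer:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureRemoteFile mimics the existence check in the log: stat the target
    // and only copy it over when stat exits non-zero ("No such file or
    // directory").
    func ensureRemoteFile(path string, scpOver func(string) error) error {
        if err := exec.Command("stat", "-c", "%s %y", path).Run(); err == nil {
            return nil // already present, skip the transfer
        }
        return scpOver(path)
    }

    func main() {
        err := ensureRemoteFile("/preloaded.tar.lz4", func(p string) error {
            fmt.Println("would scp preload tarball to", p)
            return nil
        })
        if err != nil {
            fmt.Println("transfer failed:", err)
        }
    }
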
	I0807 11:03:01.647144    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:01.647305    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:03:01.663991    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:03:01.664071    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:03:01.677760    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:03:01.677831    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:03:01.692232    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:03:01.692302    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:03:01.703486    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:03:01.703551    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:03:01.714596    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:03:01.714661    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:03:01.729048    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:03:01.729109    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:03:01.738928    9168 logs.go:276] 0 containers: []
	W0807 11:03:01.738939    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:03:01.738990    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:03:01.750137    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:03:01.750157    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:03:01.750163    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:03:01.762272    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:03:01.762286    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:03:01.767667    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:03:01.767680    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:03:01.782338    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:03:01.782349    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:03:01.795521    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:03:01.795544    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:03:01.808425    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:03:01.808437    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:03:01.823537    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:03:01.823553    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:03:01.835498    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:03:01.835513    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:03:01.849020    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:03:01.849032    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:03:01.873841    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:03:01.873855    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:03:01.913441    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:03:01.913453    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:03:01.925913    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:03:01.925926    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:03:01.948375    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:03:01.948385    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:03:01.966295    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:03:01.966311    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:03:01.980833    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:03:01.980844    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:03:01.992970    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:03:01.992983    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:03:02.036220    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:03:02.036246    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
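
Each "Gathering logs for ..." pair above is one iteration of the same loop: resolve container IDs with a k8s_<component> name filter, then tail the last 400 lines of each match. A minimal Go sketch of that loop, assuming a local docker CLI:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // gather tails the last 400 log lines of every container whose name
    // matches k8s_<component>, mirroring the "Gathering logs for ..." steps.
    func gather(component string) error {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return err
        }
        for _, id := range strings.Fields(string(out)) {
            logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                return err
            }
            fmt.Printf("== %s [%s] ==\n%s", component, id, logs)
        }
        return nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            if err := gather(c); err != nil {
                fmt.Println("gather", c, "failed:", err)
            }
        }
    }
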
	I0807 11:03:04.552368    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:04.340385    9637 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.165957625s)
	I0807 11:03:04.340398    9637 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0807 11:03:04.355530    9637 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0807 11:03:04.358505    9637 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0807 11:03:04.363314    9637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 11:03:04.446509    9637 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 11:03:05.611464    9637 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.16494575s)
	I0807 11:03:05.611550    9637 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0807 11:03:05.624573    9637 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0807 11:03:05.624582    9637 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0807 11:03:05.624587    9637 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0807 11:03:05.629979    9637 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 11:03:05.631934    9637 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0807 11:03:05.633568    9637 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0807 11:03:05.633645    9637 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 11:03:05.635548    9637 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0807 11:03:05.635752    9637 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0807 11:03:05.636785    9637 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0807 11:03:05.637188    9637 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0807 11:03:05.638550    9637 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0807 11:03:05.638551    9637 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0807 11:03:05.639970    9637 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0807 11:03:05.639973    9637 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0807 11:03:05.641145    9637 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0807 11:03:05.641405    9637 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0807 11:03:05.642330    9637 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0807 11:03:05.643404    9637 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0807 11:03:06.071424    9637 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0807 11:03:06.072137    9637 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0807 11:03:06.077246    9637 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0807 11:03:06.084142    9637 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0807 11:03:06.087260    9637 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0807 11:03:06.087281    9637 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0807 11:03:06.087287    9637 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0807 11:03:06.087299    9637 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0807 11:03:06.087331    9637 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0807 11:03:06.087331    9637 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0807 11:03:06.093264    9637 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0807 11:03:06.093286    9637 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0807 11:03:06.093340    9637 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0807 11:03:06.097121    9637 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0807 11:03:06.097144    9637 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0807 11:03:06.097127    9637 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0807 11:03:06.097171    9637 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0807 11:03:06.113867    9637 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0807 11:03:06.117428    9637 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0807 11:03:06.117440    9637 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	W0807 11:03:06.123492    9637 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0807 11:03:06.123628    9637 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0807 11:03:06.136500    9637 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0807 11:03:06.136510    9637 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0807 11:03:06.136525    9637 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0807 11:03:06.136527    9637 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0807 11:03:06.136568    9637 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0807 11:03:06.136567    9637 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0807 11:03:06.136579    9637 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0807 11:03:06.136600    9637 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0807 11:03:06.143085    9637 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0807 11:03:06.143106    9637 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0807 11:03:06.143157    9637 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0807 11:03:06.155680    9637 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0807 11:03:06.155686    9637 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0807 11:03:06.155708    9637 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0807 11:03:06.155797    9637 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0807 11:03:06.155798    9637 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0807 11:03:06.155799    9637 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0807 11:03:06.158098    9637 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0807 11:03:06.158109    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0807 11:03:06.158146    9637 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0807 11:03:06.158154    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0807 11:03:06.158159    9637 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0807 11:03:06.158172    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0807 11:03:06.192815    9637 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0807 11:03:06.192834    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0807 11:03:06.253482    9637 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0807 11:03:06.253593    9637 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 11:03:06.283932    9637 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0807 11:03:06.283960    9637 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0807 11:03:06.283968    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0807 11:03:06.292153    9637 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0807 11:03:06.292178    9637 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 11:03:06.292236    9637 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 11:03:06.378457    9637 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0807 11:03:06.378468    9637 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0807 11:03:06.378587    9637 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0807 11:03:06.391375    9637 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0807 11:03:06.391409    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0807 11:03:06.455062    9637 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0807 11:03:06.455077    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0807 11:03:06.813858    9637 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0807 11:03:06.813881    9637 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0807 11:03:06.813889    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0807 11:03:06.953703    9637 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0807 11:03:06.953743    9637 cache_images.go:92] duration metric: took 1.32915825s to LoadCachedImages
	W0807 11:03:06.953786    9637 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
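
The cache-load path above repeats one pattern per image: inspect it by ID, mark it "needs transfer" when the stored hash differs from the expected one, docker rmi the stale tag, scp the cached tarball in, and stream it into the daemon; the arch-mismatch warnings show amd64 cache entries being fixed up for arm64. The run ultimately gives up because the kube-scheduler cache file is missing on the Jenkins host. A sketch of the load step alone (the path is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // loadImage replays the "sudo cat <tar> | docker load" step used for each
    // cached image that failed the hash check.
    func loadImage(tarPath string) error {
        cmd := exec.Command("/bin/bash", "-c",
            fmt.Sprintf("sudo cat %s | docker load", tarPath))
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("docker load %s: %v: %s", tarPath, err, out)
        }
        return nil
    }

    func main() {
        if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
            fmt.Println(err)
        }
    }
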
	I0807 11:03:06.953792    9637 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0807 11:03:06.953841    9637 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-423000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-423000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
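
In the rendered drop-in above, the bare ExecStart= line is deliberate: in a systemd override, an empty assignment clears the ExecStart inherited from the base unit so the next line can substitute the minikube-managed command. A small sketch of templating such a drop-in (flag list abbreviated for brevity):

    package main

    import (
        "os"
        "text/template"
    )

    // The empty "ExecStart=" resets any ExecStart from the base kubelet unit;
    // the second ExecStart then takes effect (standard systemd drop-in rule).
    var dropIn = template.Must(template.New("kubelet").Parse(`[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart={{.Kubelet}} --hostname-override={{.Node}} --node-ip={{.IP}}

    [Install]
    `))

    func main() {
        _ = dropIn.Execute(os.Stdout, map[string]string{
            "Kubelet": "/var/lib/minikube/binaries/v1.24.1/kubelet",
            "Node":    "stopped-upgrade-423000",
            "IP":      "10.0.2.15",
        })
    }
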
	I0807 11:03:06.953913    9637 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0807 11:03:06.968550    9637 cni.go:84] Creating CNI manager for ""
	I0807 11:03:06.968562    9637 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 11:03:06.968566    9637 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 11:03:06.968574    9637 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-423000 NodeName:stopped-upgrade-423000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 11:03:06.968639    9637 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-423000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
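
The generated file above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by --- markers; the 2096-byte kubeadm.yaml.new scp'd below carries exactly this content. A dependency-free Go sketch of sanity-checking such a stream by its kind: lines:

    package main

    import (
        "fmt"
        "strings"
    )

    // countDocs splits a multi-document YAML stream on the standard "---"
    // separator line and reports each document's declared kind.
    func countDocs(y string) []string {
        var kinds []string
        for _, doc := range strings.Split(y, "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind: ") {
                    kinds = append(kinds, strings.TrimPrefix(line, "kind: "))
                }
            }
        }
        return kinds
    }

    func main() {
        y := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration"
        fmt.Println(countDocs(y)) // [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
    }
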
	
	I0807 11:03:06.968701    9637 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0807 11:03:06.971381    9637 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 11:03:06.971408    9637 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0807 11:03:06.974082    9637 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0807 11:03:06.978928    9637 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 11:03:06.985255    9637 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0807 11:03:06.990825    9637 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0807 11:03:06.992176    9637 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 11:03:06.995582    9637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 11:03:07.057135    9637 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 11:03:07.064131    9637 certs.go:68] Setting up /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000 for IP: 10.0.2.15
	I0807 11:03:07.064140    9637 certs.go:194] generating shared ca certs ...
	I0807 11:03:07.064148    9637 certs.go:226] acquiring lock for ca certs: {Name:mkf594adfb50ee91964d2e538bbb4ff47398b8a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:03:07.064361    9637 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/ca.key
	I0807 11:03:07.064415    9637 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/proxy-client-ca.key
	I0807 11:03:07.064424    9637 certs.go:256] generating profile certs ...
	I0807 11:03:07.064499    9637 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/client.key
	I0807 11:03:07.064517    9637 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.key.a2b51d81
	I0807 11:03:07.064528    9637 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.crt.a2b51d81 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0807 11:03:07.123319    9637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.crt.a2b51d81 ...
	I0807 11:03:07.123346    9637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.crt.a2b51d81: {Name:mkd86c55b851f33026777198b4f1c97f247eadad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:03:07.123676    9637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.key.a2b51d81 ...
	I0807 11:03:07.123682    9637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.key.a2b51d81: {Name:mka4e55d19e716cda36b012f8d3e655d682732c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:03:07.123823    9637 certs.go:381] copying /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.crt.a2b51d81 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.crt
	I0807 11:03:07.127992    9637 certs.go:385] copying /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.key.a2b51d81 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.key
	I0807 11:03:07.128179    9637 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/proxy-client.key
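
The apiserver profile cert generated above carries four IP SANs (the cluster service VIP 10.96.0.1, loopback, 10.0.0.1, and the node IP 10.0.2.15) so clients can verify the server under any of those addresses. A compact crypto/x509 sketch of minting a cert with those SANs; it is self-signed purely to keep the sketch short, whereas minikube signs with its minikubeCA key:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Key and template for an apiserver-style serving cert with IP SANs.
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed here; minikube signs with its CA certificate and key.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
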
	I0807 11:03:07.128318    9637 certs.go:484] found cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/7166.pem (1338 bytes)
	W0807 11:03:07.128356    9637 certs.go:480] ignoring /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/7166_empty.pem, impossibly tiny 0 bytes
	I0807 11:03:07.128362    9637 certs.go:484] found cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca-key.pem (1675 bytes)
	I0807 11:03:07.128381    9637 certs.go:484] found cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem (1082 bytes)
	I0807 11:03:07.128401    9637 certs.go:484] found cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem (1123 bytes)
	I0807 11:03:07.128421    9637 certs.go:484] found cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/key.pem (1675 bytes)
	I0807 11:03:07.128469    9637 certs.go:484] found cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/files/etc/ssl/certs/71662.pem (1708 bytes)
	I0807 11:03:07.128809    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 11:03:07.135691    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 11:03:07.142528    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 11:03:07.149595    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0807 11:03:07.156314    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0807 11:03:07.163043    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0807 11:03:07.169604    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 11:03:07.176650    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 11:03:07.184483    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/files/etc/ssl/certs/71662.pem --> /usr/share/ca-certificates/71662.pem (1708 bytes)
	I0807 11:03:07.191299    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 11:03:07.198159    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/7166.pem --> /usr/share/ca-certificates/7166.pem (1338 bytes)
	I0807 11:03:07.205261    9637 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 11:03:07.210654    9637 ssh_runner.go:195] Run: openssl version
	I0807 11:03:07.212619    9637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71662.pem && ln -fs /usr/share/ca-certificates/71662.pem /etc/ssl/certs/71662.pem"
	I0807 11:03:07.215496    9637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71662.pem
	I0807 11:03:07.216862    9637 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 17:47 /usr/share/ca-certificates/71662.pem
	I0807 11:03:07.216880    9637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71662.pem
	I0807 11:03:07.218531    9637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71662.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 11:03:07.222087    9637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 11:03:07.225668    9637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 11:03:07.227153    9637 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:59 /usr/share/ca-certificates/minikubeCA.pem
	I0807 11:03:07.227176    9637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 11:03:07.228911    9637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 11:03:07.231649    9637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7166.pem && ln -fs /usr/share/ca-certificates/7166.pem /etc/ssl/certs/7166.pem"
	I0807 11:03:07.234691    9637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7166.pem
	I0807 11:03:07.236230    9637 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 17:47 /usr/share/ca-certificates/7166.pem
	I0807 11:03:07.236253    9637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7166.pem
	I0807 11:03:07.237920    9637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7166.pem /etc/ssl/certs/51391683.0"
	I0807 11:03:07.241103    9637 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 11:03:07.242539    9637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0807 11:03:07.245230    9637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0807 11:03:07.247426    9637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0807 11:03:07.249514    9637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0807 11:03:07.251522    9637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0807 11:03:07.253158    9637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
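
The run of openssl invocations above does two jobs: -hash -noout prints each CA's subject-name hash so the cert can be symlinked as /etc/ssl/certs/<hash>.0 (OpenSSL's lookup convention, visible in the ln -fs lines), and -checkend 86400 confirms each control-plane cert is still valid 24 hours out. A small sketch of the checkend wrapper:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // notExpiringSoon wraps "openssl x509 -checkend": exit 0 means the cert
    // remains valid <seconds> from now, non-zero means it will have expired.
    func notExpiringSoon(certPath string, seconds int) bool {
        err := exec.Command("openssl", "x509", "-noout",
            "-in", certPath, "-checkend", fmt.Sprint(seconds)).Run()
        return err == nil
    }

    func main() {
        for _, c := range []string{
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
        } {
            fmt.Println(c, "valid for 24h:", notExpiringSoon(c, 86400))
        }
    }
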
	I0807 11:03:07.255013    9637 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-423000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51476 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-423000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0807 11:03:07.255081    9637 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0807 11:03:07.265610    9637 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0807 11:03:07.269145    9637 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0807 11:03:07.269151    9637 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0807 11:03:07.269180    9637 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0807 11:03:07.271923    9637 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0807 11:03:07.272266    9637 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-423000" does not appear in /Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:03:07.272363    9637 kubeconfig.go:62] /Users/jenkins/minikube-integration/19389-6671/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-423000" cluster setting kubeconfig missing "stopped-upgrade-423000" context setting]
	I0807 11:03:07.272594    9637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/kubeconfig: {Name:mkee6d4905f7dc60ed0b5cf9ef87de0f637b0682 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:03:07.273062    9637 kapi.go:59] client config for stopped-upgrade-423000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/client.key", CAFile:"/Users/jenkins/minikube-integration/19389-6671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101f73f90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0807 11:03:07.273405    9637 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0807 11:03:07.276026    9637 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-423000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
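
Drift detection above is plain diff -u: a non-zero exit marks the live kubeadm.yaml as stale and its hunks are logged. Both hunks matter here: the CRI socket gains the unix:// scheme and the cgroup driver flips from systemd to cgroupfs, so the cluster is reconfigured from the new file. A sketch of the same check:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // configDrift runs "diff -u" between the live kubeadm.yaml and the newly
    // rendered one; diff exits 1 when the files differ, which is the signal
    // to reconfigure the cluster from the new file.
    func configDrift(oldPath, newPath string) (bool, string) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).Output()
        if err == nil {
            return false, "" // identical
        }
        return true, string(out) // exit 1: files differ; out holds the hunks
    }

    func main() {
        drift, body := configDrift("/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new")
        if drift {
            fmt.Println("detected kubeadm config drift:\n" + body)
        }
    }
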
	I0807 11:03:07.276032    9637 kubeadm.go:1160] stopping kube-system containers ...
	I0807 11:03:07.276067    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0807 11:03:07.286754    9637 docker.go:483] Stopping containers: [18a4d38a0c8c 2d9c10a9a9e1 a895a6c8fd77 56e44fe63415 6b9b69239f16 1afe0fd1fec7 d139dcfead8f 4940a26d001e]
	I0807 11:03:07.286815    9637 ssh_runner.go:195] Run: docker stop 18a4d38a0c8c 2d9c10a9a9e1 a895a6c8fd77 56e44fe63415 6b9b69239f16 1afe0fd1fec7 d139dcfead8f 4940a26d001e
	I0807 11:03:07.297554    9637 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0807 11:03:07.303109    9637 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0807 11:03:07.306205    9637 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0807 11:03:07.306211    9637 kubeadm.go:157] found existing configuration files:
	
	I0807 11:03:07.306235    9637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/admin.conf
	I0807 11:03:07.309212    9637 kubeadm.go:163] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0807 11:03:07.309232    9637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0807 11:03:07.311731    9637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/kubelet.conf
	I0807 11:03:07.314249    9637 kubeadm.go:163] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0807 11:03:07.314269    9637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0807 11:03:07.317315    9637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/controller-manager.conf
	I0807 11:03:07.319796    9637 kubeadm.go:163] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0807 11:03:07.319820    9637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0807 11:03:07.322383    9637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/scheduler.conf
	I0807 11:03:07.325311    9637 kubeadm.go:163] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0807 11:03:07.325332    9637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
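
The grep/rm pairs above are a stale-kubeconfig sweep: any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is deleted (here every grep exits 2 because the files do not exist yet, so each rm -f is a harmless no-op). A sketch, assuming the same endpoint convention:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // cleanStaleConf removes a kubeconfig that does not reference the expected
    // control-plane endpoint; a missing file greps as exit 2 and is "removed"
    // as a no-op, matching the log above.
    func cleanStaleConf(path, endpoint string) error {
        if exec.Command("grep", endpoint, path).Run() == nil {
            return nil // endpoint present, keep the file
        }
        if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
            return err
        }
        return nil
    }

    func main() {
        for _, f := range []string{"admin.conf", "kubelet.conf",
            "controller-manager.conf", "scheduler.conf"} {
            err := cleanStaleConf("/etc/kubernetes/"+f,
                "https://control-plane.minikube.internal:51476")
            fmt.Println(f, err)
        }
    }
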
	I0807 11:03:07.327876    9637 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0807 11:03:07.330541    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0807 11:03:07.352951    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0807 11:03:07.914828    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0807 11:03:08.026502    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0807 11:03:08.045663    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0807 11:03:08.066514    9637 api_server.go:52] waiting for apiserver process to appear ...
	I0807 11:03:08.066599    9637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 11:03:08.568657    9637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 11:03:09.552533    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:09.552632    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:03:09.564273    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:03:09.564356    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:03:09.575523    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:03:09.575587    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:03:09.587871    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:03:09.587938    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:03:09.598674    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:03:09.598740    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:03:09.609489    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:03:09.609564    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:03:09.620444    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:03:09.620527    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:03:09.632545    9168 logs.go:276] 0 containers: []
	W0807 11:03:09.632555    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:03:09.632611    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:03:09.643212    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:03:09.643230    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:03:09.643236    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:03:09.678715    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:03:09.678728    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:03:09.694277    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:03:09.694291    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:03:09.706214    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:03:09.706226    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:03:09.725213    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:03:09.725225    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:03:09.746426    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:03:09.746438    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:03:09.762303    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:03:09.762314    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:03:09.774900    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:03:09.774912    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:03:09.779645    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:03:09.779652    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:03:09.791999    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:03:09.792011    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:03:09.804427    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:03:09.804438    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:03:09.816148    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:03:09.816159    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:03:09.830550    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:03:09.830561    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:03:09.843262    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:03:09.843273    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:03:09.868559    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:03:09.868566    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:03:09.880587    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:03:09.880624    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:03:09.904082    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:03:09.904096    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:03:09.068699    9637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 11:03:09.072905    9637 api_server.go:72] duration metric: took 1.006399709s to wait for apiserver process to appear ...
	I0807 11:03:09.072913    9637 api_server.go:88] waiting for apiserver healthz status ...
	I0807 11:03:09.072922    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
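
Both processes (9637 restarting stopped-upgrade-423000 and 9168 on its own profile) are now in the same loop: probe https://10.0.2.15:8443/healthz with what looks, from the probe spacing, like a five-second client timeout, and log each "context deadline exceeded" as a failed attempt. A sketch of one such probe; InsecureSkipVerify appears only to keep the sketch self-contained, whereas the real client is configured with the cluster CA and client certificates shown earlier:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probeHealthz issues one apiserver health probe with a short timeout,
    // mirroring the polling loop in the log.
    func probeHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. context deadline exceeded while awaiting headers
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println("stopped:", err)
        }
    }
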
	I0807 11:03:12.447827    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:14.074245    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:14.074288    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:17.450374    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:17.450511    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:03:17.463524    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:03:17.463608    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:03:17.479952    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:03:17.480017    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:03:17.496593    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:03:17.496672    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:03:17.507284    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:03:17.507356    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:03:17.518093    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:03:17.518157    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:03:17.529252    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:03:17.529317    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:03:17.538972    9168 logs.go:276] 0 containers: []
	W0807 11:03:17.538984    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:03:17.539042    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:03:17.549330    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:03:17.549356    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:03:17.549362    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:03:17.591220    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:03:17.591231    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:03:17.626668    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:03:17.626682    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:03:17.641238    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:03:17.641250    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:03:17.659591    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:03:17.659601    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:03:17.674374    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:03:17.674385    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:03:17.678859    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:03:17.678868    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:03:17.693604    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:03:17.693614    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:03:17.705233    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:03:17.705249    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:03:17.717194    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:03:17.717205    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:03:17.728242    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:03:17.728256    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:03:17.739482    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:03:17.739492    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:03:17.761478    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:03:17.761487    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:03:17.774999    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:03:17.775011    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:03:17.787260    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:03:17.787274    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:03:17.798809    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:03:17.798820    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:03:17.813568    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:03:17.813579    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
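[Editor's note] Each failed health check triggers the same diagnostic sweep seen above: enumerate containers per control-plane component with a docker name filter, then tail the last 400 lines of each. A rough sketch of that cycle, with illustrative names rather than minikube's logs.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns all container IDs whose name matches k8s_<component>,
// mirroring: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// mirrors: docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
		}
	}
}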
	I0807 11:03:20.339823    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:19.074945    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:19.075032    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:25.342063    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:25.342188    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:03:25.357384    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:03:25.357466    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:03:25.369793    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:03:25.369858    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:03:25.380740    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:03:25.380805    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:03:25.391012    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:03:25.391079    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:03:25.402149    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:03:25.402217    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:03:25.413038    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:03:25.413106    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:03:25.423409    9168 logs.go:276] 0 containers: []
	W0807 11:03:25.423421    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:03:25.423476    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:03:25.433758    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:03:25.433778    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:03:25.433783    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:03:25.448209    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:03:25.448220    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:03:25.462791    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:03:25.462801    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:03:25.478909    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:03:25.478923    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:03:25.497562    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:03:25.497576    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:03:25.509726    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:03:25.509742    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:03:25.552614    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:03:25.552636    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:03:25.583031    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:03:25.583044    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:03:25.601236    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:03:25.601246    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:03:25.624229    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:03:25.624242    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:03:25.646535    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:03:25.646543    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:03:25.658177    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:03:25.658191    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:03:25.670135    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:03:25.670150    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:03:25.705168    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:03:25.705179    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:03:25.716853    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:03:25.716864    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:03:25.732451    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:03:25.732463    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:03:25.746268    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:03:25.746280    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:03:24.075298    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:24.075374    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:28.252954    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:29.075878    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:29.075931    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:33.255259    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:33.255560    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:03:33.287858    9168 logs.go:276] 2 containers: [246aeeaf4658 9827cca0f570]
	I0807 11:03:33.288000    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:03:33.307412    9168 logs.go:276] 2 containers: [e81801e7ff22 a88a4a0a0efd]
	I0807 11:03:33.307509    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:03:33.322117    9168 logs.go:276] 1 containers: [67aeec01045e]
	I0807 11:03:33.322199    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:03:33.334420    9168 logs.go:276] 2 containers: [4d5b780006e3 45ecbb03d4e7]
	I0807 11:03:33.334490    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:03:33.345070    9168 logs.go:276] 1 containers: [2022224b42ea]
	I0807 11:03:33.345145    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:03:33.355797    9168 logs.go:276] 2 containers: [4823c9381ef1 61ffc7a70f75]
	I0807 11:03:33.355867    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:03:33.366072    9168 logs.go:276] 0 containers: []
	W0807 11:03:33.366083    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:03:33.366140    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:03:33.377629    9168 logs.go:276] 2 containers: [6595f6c8ecd9 f6d56b367ff8]
	I0807 11:03:33.377646    9168 logs.go:123] Gathering logs for kube-controller-manager [4823c9381ef1] ...
	I0807 11:03:33.377655    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4823c9381ef1"
	I0807 11:03:33.395183    9168 logs.go:123] Gathering logs for storage-provisioner [6595f6c8ecd9] ...
	I0807 11:03:33.395195    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6595f6c8ecd9"
	I0807 11:03:33.407055    9168 logs.go:123] Gathering logs for storage-provisioner [f6d56b367ff8] ...
	I0807 11:03:33.407065    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d56b367ff8"
	I0807 11:03:33.418918    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:03:33.418927    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:03:33.460200    9168 logs.go:123] Gathering logs for kube-apiserver [246aeeaf4658] ...
	I0807 11:03:33.460211    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 246aeeaf4658"
	I0807 11:03:33.474319    9168 logs.go:123] Gathering logs for coredns [67aeec01045e] ...
	I0807 11:03:33.474330    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67aeec01045e"
	I0807 11:03:33.486411    9168 logs.go:123] Gathering logs for etcd [e81801e7ff22] ...
	I0807 11:03:33.486423    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81801e7ff22"
	I0807 11:03:33.501026    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:03:33.501035    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:03:33.505422    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:03:33.505429    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:03:33.539974    9168 logs.go:123] Gathering logs for kube-apiserver [9827cca0f570] ...
	I0807 11:03:33.539984    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9827cca0f570"
	I0807 11:03:33.551130    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:03:33.551142    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:03:33.573375    9168 logs.go:123] Gathering logs for etcd [a88a4a0a0efd] ...
	I0807 11:03:33.573382    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88a4a0a0efd"
	I0807 11:03:33.599186    9168 logs.go:123] Gathering logs for kube-scheduler [45ecbb03d4e7] ...
	I0807 11:03:33.599196    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ecbb03d4e7"
	I0807 11:03:33.613392    9168 logs.go:123] Gathering logs for kube-proxy [2022224b42ea] ...
	I0807 11:03:33.613403    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2022224b42ea"
	I0807 11:03:33.625432    9168 logs.go:123] Gathering logs for kube-scheduler [4d5b780006e3] ...
	I0807 11:03:33.625442    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5b780006e3"
	I0807 11:03:33.637076    9168 logs.go:123] Gathering logs for kube-controller-manager [61ffc7a70f75] ...
	I0807 11:03:33.637086    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ffc7a70f75"
	I0807 11:03:33.653061    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:03:33.653076    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:03:34.076581    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:34.076629    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:36.166688    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:41.167598    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:41.167660    9168 kubeadm.go:597] duration metric: took 4m4.917555333s to restartPrimaryControlPlane
	W0807 11:03:41.167720    9168 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0807 11:03:41.167745    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0807 11:03:42.169497    9168 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.001747833s)
	I0807 11:03:42.169552    9168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 11:03:42.174678    9168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0807 11:03:42.177639    9168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0807 11:03:42.180370    9168 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0807 11:03:42.180376    9168 kubeadm.go:157] found existing configuration files:
	
	I0807 11:03:42.180399    9168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/admin.conf
	I0807 11:03:42.183078    9168 kubeadm.go:163] "https://control-plane.minikube.internal:51250" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0807 11:03:42.183100    9168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0807 11:03:42.186368    9168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/kubelet.conf
	I0807 11:03:42.189220    9168 kubeadm.go:163] "https://control-plane.minikube.internal:51250" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0807 11:03:42.189242    9168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0807 11:03:42.191801    9168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/controller-manager.conf
	I0807 11:03:42.194919    9168 kubeadm.go:163] "https://control-plane.minikube.internal:51250" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0807 11:03:42.194942    9168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0807 11:03:42.197917    9168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/scheduler.conf
	I0807 11:03:42.200397    9168 kubeadm.go:163] "https://control-plane.minikube.internal:51250" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51250 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0807 11:03:42.200421    9168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
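[Editor's note] The grep/rm sequence above checks whether each kubeconfig under /etc/kubernetes still references the expected control-plane endpoint and removes it when it does not (or, as here, when the file is already missing), so the subsequent kubeadm init regenerates all four. A hedged Go equivalent of that check-and-remove pattern:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	// Endpoint taken from the log above; reading /etc/kubernetes requires root.
	endpoint := []byte("https://control-plane.minikube.internal:51250")
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, endpoint) {
			// mirrors: sudo grep <endpoint> <file> || sudo rm -f <file>
			os.Remove(f)
			fmt.Printf("removed stale %s\n", f)
		}
	}
}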
	I0807 11:03:42.203071    9168 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0807 11:03:42.221016    9168 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0807 11:03:42.221095    9168 kubeadm.go:310] [preflight] Running pre-flight checks
	I0807 11:03:42.268888    9168 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0807 11:03:42.268958    9168 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0807 11:03:42.269005    9168 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0807 11:03:42.318553    9168 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0807 11:03:42.322813    9168 out.go:204]   - Generating certificates and keys ...
	I0807 11:03:42.322847    9168 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0807 11:03:42.322886    9168 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0807 11:03:42.322930    9168 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0807 11:03:42.322962    9168 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0807 11:03:42.322995    9168 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0807 11:03:42.323021    9168 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0807 11:03:42.323050    9168 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0807 11:03:42.323087    9168 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0807 11:03:42.323121    9168 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0807 11:03:42.323156    9168 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0807 11:03:42.323176    9168 kubeadm.go:310] [certs] Using the existing "sa" key
	I0807 11:03:42.323202    9168 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0807 11:03:42.411599    9168 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0807 11:03:42.512812    9168 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0807 11:03:42.662691    9168 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0807 11:03:42.743147    9168 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0807 11:03:42.773453    9168 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0807 11:03:42.774811    9168 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0807 11:03:42.774837    9168 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0807 11:03:42.844379    9168 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0807 11:03:39.077523    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:39.077572    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:42.848698    9168 out.go:204]   - Booting up control plane ...
	I0807 11:03:42.848749    9168 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0807 11:03:42.848787    9168 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0807 11:03:42.848839    9168 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0807 11:03:42.848886    9168 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0807 11:03:42.848980    9168 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0807 11:03:47.348663    9168 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503128 seconds
	I0807 11:03:47.348740    9168 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0807 11:03:47.351965    9168 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0807 11:03:47.881955    9168 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0807 11:03:47.882350    9168 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-210000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0807 11:03:48.387143    9168 kubeadm.go:310] [bootstrap-token] Using token: k3wlnb.r7zejrhmlgya4r9l
	I0807 11:03:44.078486    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:44.078510    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:48.393787    9168 out.go:204]   - Configuring RBAC rules ...
	I0807 11:03:48.393846    9168 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0807 11:03:48.393889    9168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0807 11:03:48.400543    9168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0807 11:03:48.401690    9168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0807 11:03:48.403428    9168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0807 11:03:48.404394    9168 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0807 11:03:48.407950    9168 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0807 11:03:48.557151    9168 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0807 11:03:48.791466    9168 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0807 11:03:48.792108    9168 kubeadm.go:310] 
	I0807 11:03:48.792140    9168 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0807 11:03:48.792145    9168 kubeadm.go:310] 
	I0807 11:03:48.792185    9168 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0807 11:03:48.792192    9168 kubeadm.go:310] 
	I0807 11:03:48.792204    9168 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0807 11:03:48.792235    9168 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0807 11:03:48.792262    9168 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0807 11:03:48.792265    9168 kubeadm.go:310] 
	I0807 11:03:48.792287    9168 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0807 11:03:48.792290    9168 kubeadm.go:310] 
	I0807 11:03:48.792311    9168 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0807 11:03:48.792315    9168 kubeadm.go:310] 
	I0807 11:03:48.792338    9168 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0807 11:03:48.792378    9168 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0807 11:03:48.792422    9168 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0807 11:03:48.792425    9168 kubeadm.go:310] 
	I0807 11:03:48.792461    9168 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0807 11:03:48.792500    9168 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0807 11:03:48.792505    9168 kubeadm.go:310] 
	I0807 11:03:48.792546    9168 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k3wlnb.r7zejrhmlgya4r9l \
	I0807 11:03:48.792605    9168 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:be167f124a8fb636a1678f53a2902f8fa29ed7e7a056c6f6e13484429b709f7d \
	I0807 11:03:48.792622    9168 kubeadm.go:310] 	--control-plane 
	I0807 11:03:48.792625    9168 kubeadm.go:310] 
	I0807 11:03:48.792726    9168 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0807 11:03:48.792736    9168 kubeadm.go:310] 
	I0807 11:03:48.792776    9168 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k3wlnb.r7zejrhmlgya4r9l \
	I0807 11:03:48.792825    9168 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:be167f124a8fb636a1678f53a2902f8fa29ed7e7a056c6f6e13484429b709f7d 
	I0807 11:03:48.792901    9168 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
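[Editor's note] The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A small Go sketch that reproduces that value; the certificate path is the conventional minikube location and is assumed here:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}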
	I0807 11:03:48.792911    9168 cni.go:84] Creating CNI manager for ""
	I0807 11:03:48.792922    9168 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 11:03:48.800849    9168 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0807 11:03:48.804944    9168 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0807 11:03:48.807824    9168 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
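[Editor's note] The 496-byte conflist copied above enables a bridge pod network. Illustrative only, and not the exact file minikube writes: a minimal bridge-plus-portmap conflist and the corresponding write might look like this sketch, with the subnet an assumption:

package main

import "os"

// Hypothetical minimal conflist; minikube's real /etc/cni/net.d/1-k8s.conflist
// may differ in fields and values.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// mirrors: sudo mkdir -p /etc/cni/net.d, then the scp of the conflist
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}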
	I0807 11:03:48.812961    9168 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0807 11:03:48.813003    9168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 11:03:48.813040    9168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-210000 minikube.k8s.io/updated_at=2024_08_07T11_03_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e minikube.k8s.io/name=running-upgrade-210000 minikube.k8s.io/primary=true
	I0807 11:03:48.847341    9168 kubeadm.go:1113] duration metric: took 34.370333ms to wait for elevateKubeSystemPrivileges
	I0807 11:03:48.847359    9168 ops.go:34] apiserver oom_adj: -16
	I0807 11:03:48.855560    9168 kubeadm.go:394] duration metric: took 4m12.634416417s to StartCluster
	I0807 11:03:48.855583    9168 settings.go:142] acquiring lock: {Name:mk55ff1d0ed65f587ff79ec8ce8fd4d10e83296d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:03:48.855752    9168 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:03:48.856123    9168 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/kubeconfig: {Name:mkee6d4905f7dc60ed0b5cf9ef87de0f637b0682 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:03:48.856328    9168 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:03:48.856351    9168 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0807 11:03:48.856410    9168 config.go:182] Loaded profile config "running-upgrade-210000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0807 11:03:48.856414    9168 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-210000"
	I0807 11:03:48.856431    9168 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-210000"
	W0807 11:03:48.856438    9168 addons.go:243] addon storage-provisioner should already be in state true
	I0807 11:03:48.856437    9168 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-210000"
	I0807 11:03:48.856450    9168 host.go:66] Checking if "running-upgrade-210000" exists ...
	I0807 11:03:48.856461    9168 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-210000"
	I0807 11:03:48.857308    9168 kapi.go:59] client config for running-upgrade-210000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/running-upgrade-210000/client.key", CAFile:"/Users/jenkins/minikube-integration/19389-6671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10592ff90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0807 11:03:48.857432    9168 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-210000"
	W0807 11:03:48.857436    9168 addons.go:243] addon default-storageclass should already be in state true
	I0807 11:03:48.857442    9168 host.go:66] Checking if "running-upgrade-210000" exists ...
	I0807 11:03:48.859787    9168 out.go:177] * Verifying Kubernetes components...
	I0807 11:03:48.860110    9168 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0807 11:03:48.863938    9168 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0807 11:03:48.863946    9168 sshutil.go:53] new ssh client: &{IP:localhost Port:51218 SSHKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/running-upgrade-210000/id_rsa Username:docker}
	I0807 11:03:48.867821    9168 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 11:03:48.871785    9168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 11:03:48.875880    9168 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 11:03:48.875886    9168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0807 11:03:48.875892    9168 sshutil.go:53] new ssh client: &{IP:localhost Port:51218 SSHKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/running-upgrade-210000/id_rsa Username:docker}
	I0807 11:03:48.944424    9168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 11:03:48.949494    9168 api_server.go:52] waiting for apiserver process to appear ...
	I0807 11:03:48.949541    9168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 11:03:48.953566    9168 api_server.go:72] duration metric: took 97.226417ms to wait for apiserver process to appear ...
	I0807 11:03:48.953574    9168 api_server.go:88] waiting for apiserver healthz status ...
	I0807 11:03:48.953581    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:48.967820    9168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 11:03:49.014590    9168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0807 11:03:49.079620    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:49.079644    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:53.955848    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:53.955952    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:54.081199    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:54.081271    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:58.956730    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:58.956766    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:59.083492    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:59.083532    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:03.957361    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:03.957405    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:04.085669    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:04.085689    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:08.958306    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:08.958327    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:09.085960    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:09.086082    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:04:09.100128    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:04:09.100198    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:04:09.111954    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:04:09.112027    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:04:09.122298    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:04:09.122367    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:04:09.132716    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:04:09.132787    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:04:09.142741    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:04:09.142799    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:04:09.153515    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:04:09.153589    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:04:09.164163    9637 logs.go:276] 0 containers: []
	W0807 11:04:09.164174    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:04:09.164229    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:04:09.174538    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:04:09.174562    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:04:09.174567    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:04:09.178831    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:04:09.178840    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:04:09.286102    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:04:09.286113    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:04:09.300789    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:04:09.300802    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:04:09.314628    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:04:09.314640    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:04:09.333273    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:04:09.333283    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:04:09.349797    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:04:09.349809    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:04:09.361913    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:04:09.361925    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:04:09.399294    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:04:09.399303    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:04:09.413909    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:04:09.413922    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:04:09.425551    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:04:09.425564    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:04:09.439834    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:04:09.439845    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:04:09.451549    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:04:09.451561    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:04:09.479446    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:04:09.479457    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:04:09.495469    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:04:09.495480    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:04:09.510175    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:04:09.510188    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:04:09.522264    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:04:09.522274    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:04:12.047937    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:13.959181    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:13.959242    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:17.050343    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:17.050546    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:04:17.072600    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:04:17.072710    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:04:17.088385    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:04:17.088463    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:04:17.106345    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:04:17.106409    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:04:17.117271    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:04:17.117331    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:04:17.127579    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:04:17.127648    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:04:17.138214    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:04:17.138281    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:04:17.148545    9637 logs.go:276] 0 containers: []
	W0807 11:04:17.148557    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:04:17.148608    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:04:17.163049    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:04:17.163066    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:04:17.163072    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:04:17.167553    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:04:17.167562    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:04:17.181326    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:04:17.181339    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:04:17.218495    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:04:17.218504    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:04:17.242592    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:04:17.242601    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:04:17.253573    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:04:17.253585    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:04:17.268218    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:04:17.268228    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:04:17.293655    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:04:17.293666    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:04:17.307703    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:04:17.307713    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:04:17.321871    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:04:17.321887    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:04:17.336405    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:04:17.336414    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:04:17.354687    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:04:17.354697    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:04:17.366613    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:04:17.366624    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:04:17.385788    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:04:17.385798    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:04:17.424306    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:04:17.424317    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:04:17.435646    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:04:17.435660    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:04:17.449116    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:04:17.449132    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:04:18.960749    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:18.960772    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0807 11:04:19.305366    9168 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0807 11:04:19.310714    9168 out.go:177] * Enabled addons: storage-provisioner
	I0807 11:04:19.319596    9168 addons.go:510] duration metric: took 30.4634615s for enable addons: enabled=[storage-provisioner]
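[Editor's note] Enabling default-storageclass fails above because its callback must list StorageClasses through the still-unreachable apiserver, whereas storage-provisioner only required a kubectl apply over SSH. A hypothetical client-go sketch of the failing step, assuming the in-VM kubeconfig path and the well-known default-class annotation:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		// This is where "dial tcp 10.0.2.15:8443: i/o timeout" surfaces
		// when the apiserver never comes up.
		panic(err)
	}
	for i := range scs.Items {
		sc := &scs.Items[i]
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		// Mark "standard" default, clear the flag on everything else.
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] =
			fmt.Sprint(sc.Name == "standard")
		if _, err := cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
}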
	I0807 11:04:19.960845    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:23.962448    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:23.962506    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:24.963311    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:24.963549    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:04:24.978435    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:04:24.978504    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:04:24.989570    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:04:24.989630    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:04:25.001044    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:04:25.001112    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:04:25.014420    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:04:25.014485    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:04:25.024727    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:04:25.024789    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:04:25.035322    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:04:25.035387    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:04:25.048966    9637 logs.go:276] 0 containers: []
	W0807 11:04:25.048980    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:04:25.049037    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:04:25.059472    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:04:25.059503    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:04:25.059509    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:04:25.071527    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:04:25.071538    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:04:25.083605    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:04:25.083615    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:04:25.121213    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:04:25.121222    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:04:25.125293    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:04:25.125299    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:04:25.139439    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:04:25.139450    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:04:25.157000    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:04:25.157011    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:04:25.169167    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:04:25.169176    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:04:25.194443    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:04:25.194459    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:04:25.208861    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:04:25.208871    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:04:25.220453    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:04:25.220468    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:04:25.247592    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:04:25.247608    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:04:25.286365    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:04:25.286382    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:04:25.300598    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:04:25.300613    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:04:25.315238    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:04:25.315252    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:04:25.331140    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:04:25.331152    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:04:25.346362    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:04:25.346377    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:04:27.862306    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:28.964808    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:28.964854    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:32.864901    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:32.865349    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:04:32.904989    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:04:32.905126    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:04:32.926120    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:04:32.926226    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:04:32.941483    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:04:32.941556    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:04:32.954199    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:04:32.954271    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:04:32.967393    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:04:32.967463    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:04:32.978629    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:04:32.978723    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:04:32.989474    9637 logs.go:276] 0 containers: []
	W0807 11:04:32.989484    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:04:32.989538    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:04:33.000668    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:04:33.000687    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:04:33.000693    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:04:33.014804    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:04:33.014815    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:04:33.039664    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:04:33.039677    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:04:33.055558    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:04:33.055574    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:04:33.068246    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:04:33.068258    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:04:33.105591    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:04:33.105602    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:04:33.120245    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:04:33.120254    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:04:33.131469    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:04:33.131481    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:04:33.143201    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:04:33.143212    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:04:33.169453    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:04:33.169461    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:04:33.173293    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:04:33.173299    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:04:33.190284    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:04:33.190294    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:04:33.215803    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:04:33.215814    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:04:33.230131    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:04:33.230141    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:04:33.246354    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:04:33.246364    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:04:33.261095    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:04:33.261108    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:04:33.297599    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:04:33.297610    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:04:33.965962    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:33.966017    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:35.814211    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:38.968180    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:38.968230    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:40.816757    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:40.817195    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:04:40.851613    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:04:40.851759    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:04:40.873560    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:04:40.873656    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:04:40.888412    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:04:40.888477    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:04:40.900559    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:04:40.900631    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:04:40.912267    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:04:40.912339    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:04:40.923379    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:04:40.923449    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:04:40.934256    9637 logs.go:276] 0 containers: []
	W0807 11:04:40.934277    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:04:40.934338    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:04:40.945257    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:04:40.945274    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:04:40.945280    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:04:40.966225    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:04:40.966238    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:04:40.985418    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:04:40.985429    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:04:41.000001    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:04:41.000011    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:04:41.025248    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:04:41.025255    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:04:41.063173    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:04:41.063184    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:04:41.078171    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:04:41.078183    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:04:41.089510    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:04:41.089519    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:04:41.126631    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:04:41.126644    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:04:41.141246    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:04:41.141257    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:04:41.154076    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:04:41.154087    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:04:41.166025    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:04:41.166040    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:04:41.178300    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:04:41.178309    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:04:41.182268    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:04:41.182276    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:04:41.198033    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:04:41.198047    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:04:41.216627    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:04:41.216638    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:04:41.248550    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:04:41.248562    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:04:43.760461    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:43.970472    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:43.970524    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:48.762361    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:48.762476    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:04:48.972753    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:48.972833    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:04:48.984698    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:04:48.984771    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:04:48.996320    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:04:48.996390    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:04:49.022093    9168 logs.go:276] 2 containers: [6a43bd083386 1ccd3a59766f]
	I0807 11:04:49.022170    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:04:49.036654    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:04:49.036728    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:04:49.048868    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:04:49.048940    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:04:49.063638    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:04:49.063705    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:04:49.074045    9168 logs.go:276] 0 containers: []
	W0807 11:04:49.074059    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:04:49.074119    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:04:49.085224    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:04:49.085238    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:04:49.085243    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:04:49.100480    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:04:49.100491    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:04:49.147528    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:04:49.147545    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:04:49.162803    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:04:49.162815    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:04:49.175581    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:04:49.175593    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:04:49.192292    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:04:49.192301    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:04:49.213741    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:04:49.213753    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:04:49.226369    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:04:49.226381    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:04:49.249407    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:04:49.249417    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:04:49.260402    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:04:49.260413    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:04:49.295497    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:04:49.295516    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:04:49.300331    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:04:49.300339    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:04:49.314582    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:04:49.314592    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:04:48.773648    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:04:48.773720    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:04:48.785009    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:04:48.785089    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:04:48.796212    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:04:48.796281    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:04:48.807119    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:04:48.807185    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:04:48.817454    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:04:48.817514    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:04:48.828166    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:04:48.828230    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:04:48.838658    9637 logs.go:276] 0 containers: []
	W0807 11:04:48.838669    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:04:48.838721    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:04:48.851912    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:04:48.851934    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:04:48.851940    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:04:48.891416    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:04:48.891423    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:04:48.927675    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:04:48.927687    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:04:48.952917    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:04:48.952928    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:04:48.969997    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:04:48.970008    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:04:48.984343    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:04:48.984356    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:04:49.002989    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:04:49.003002    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:04:49.020106    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:04:49.020120    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:04:49.025526    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:04:49.025546    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:04:49.038761    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:04:49.038772    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:04:49.057788    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:04:49.057800    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:04:49.083070    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:04:49.083083    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:04:49.098236    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:04:49.098253    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:04:49.113502    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:04:49.113517    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:04:49.125564    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:04:49.125577    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:04:49.141307    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:04:49.141325    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:04:49.157049    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:04:49.157061    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:04:51.672755    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:51.827964    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:56.673960    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:56.674075    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:04:56.685591    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:04:56.685670    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:04:56.696428    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:04:56.696495    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:04:56.711350    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:04:56.711422    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:04:56.722670    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:04:56.722744    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:04:56.733333    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:04:56.733401    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:04:56.743639    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:04:56.743704    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:04:56.757965    9637 logs.go:276] 0 containers: []
	W0807 11:04:56.757979    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:04:56.758030    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:04:56.768784    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:04:56.768800    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:04:56.768806    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:04:56.793532    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:04:56.793544    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:04:56.808292    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:04:56.808303    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:04:56.825157    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:04:56.825166    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:04:56.840595    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:04:56.840613    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:04:56.845335    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:04:56.845347    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:04:56.883691    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:04:56.883710    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:04:56.901362    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:04:56.901376    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:04:56.913989    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:04:56.914001    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:04:56.930917    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:04:56.930928    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:04:56.957134    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:04:56.957147    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:04:56.997865    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:04:56.997880    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:04:57.012723    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:04:57.012736    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:04:57.028634    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:04:57.028643    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:04:57.041767    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:04:57.041776    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:04:57.053977    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:04:57.053990    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:04:57.069786    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:04:57.069797    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:04:56.830431    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:56.830512    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:04:56.842235    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:04:56.842304    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:04:56.853958    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:04:56.854027    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:04:56.865257    9168 logs.go:276] 2 containers: [6a43bd083386 1ccd3a59766f]
	I0807 11:04:56.865334    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:04:56.876148    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:04:56.876221    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:04:56.892084    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:04:56.892158    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:04:56.903796    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:04:56.903870    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:04:56.915059    9168 logs.go:276] 0 containers: []
	W0807 11:04:56.915070    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:04:56.915131    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:04:56.926205    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:04:56.926221    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:04:56.926227    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:04:56.938681    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:04:56.938694    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:04:56.951038    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:04:56.951050    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:04:56.955961    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:04:56.955968    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:04:56.973835    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:04:56.973846    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:04:56.990604    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:04:56.990615    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:04:57.009157    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:04:57.009175    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:04:57.025596    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:04:57.025609    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:04:57.038684    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:04:57.038696    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:04:57.064494    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:04:57.064508    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:04:57.102511    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:04:57.102524    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:04:57.137093    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:04:57.137104    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:04:57.151765    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:04:57.151776    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:04:59.672401    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:59.588158    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:04.673114    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:04.673166    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:04.688053    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:05:04.688115    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:04.703610    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:05:04.703677    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:04.719328    9168 logs.go:276] 2 containers: [6a43bd083386 1ccd3a59766f]
	I0807 11:05:04.719396    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:04.730635    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:05:04.730705    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:04.745680    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:05:04.745745    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:04.759994    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:05:04.760058    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:04.771189    9168 logs.go:276] 0 containers: []
	W0807 11:05:04.771201    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:04.771258    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:04.782763    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:05:04.782779    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:04.782785    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:04.819895    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:05:04.819906    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:05:04.835863    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:05:04.835875    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:05:04.855993    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:05:04.856007    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:05:04.868836    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:05:04.868850    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:04.881149    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:04.881166    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:04.919423    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:04.919438    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:04.924659    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:05:04.924667    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:05:04.939157    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:05:04.939173    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:05:04.951918    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:05:04.951932    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:05:04.967573    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:05:04.967584    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:05:04.980257    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:05:04.980269    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:05:04.998401    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:04.998412    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:04.590718    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:04.590899    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:04.607753    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:05:04.607838    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:04.619122    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:05:04.619185    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:04.630193    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:05:04.630259    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:04.640803    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:05:04.640868    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:04.651456    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:05:04.651516    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:04.662827    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:05:04.662889    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:04.672984    9637 logs.go:276] 0 containers: []
	W0807 11:05:04.672995    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:04.673047    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:04.684506    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:05:04.684524    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:05:04.684530    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:05:04.696894    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:04.696904    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:04.701071    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:05:04.701080    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:05:04.716294    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:05:04.716304    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:05:04.744484    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:05:04.744501    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:05:04.757404    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:04.757416    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:04.795113    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:05:04.795125    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:05:04.807109    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:04.807122    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:04.848068    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:05:04.848090    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:05:04.863175    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:05:04.863188    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:05:04.879474    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:05:04.879488    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:05:04.902875    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:04.902886    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:04.927906    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:05:04.927918    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:04.940685    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:05:04.940693    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:05:04.957257    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:05:04.957272    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:05:04.973591    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:05:04.973604    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:05:04.993438    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:05:04.993449    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:05:07.507799    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:07.526682    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:12.510092    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:12.510434    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:12.548560    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:05:12.548676    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:12.575156    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:05:12.575215    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:12.594364    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:05:12.594433    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:12.607328    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:05:12.607365    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:12.619503    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:05:12.619555    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:12.636290    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:05:12.636358    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:12.647666    9637 logs.go:276] 0 containers: []
	W0807 11:05:12.647675    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:12.647734    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:12.662877    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:05:12.662893    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:12.662905    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:12.667768    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:05:12.667780    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:05:12.684204    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:05:12.684217    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:05:12.696482    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:05:12.696494    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:05:12.709117    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:05:12.709133    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:05:12.728219    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:05:12.728229    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:05:12.744215    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:05:12.744229    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:05:12.759049    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:05:12.759062    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:05:12.774003    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:05:12.774014    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:05:12.790045    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:05:12.790056    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:05:12.805353    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:05:12.805362    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:12.817956    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:12.817967    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:12.861152    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:12.861162    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:12.897737    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:05:12.897749    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:05:12.922239    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:05:12.922252    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:05:12.933662    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:05:12.933672    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:05:12.945437    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:12.945450    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:12.528852    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:12.529108    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:12.554108    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:05:12.554188    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:12.570629    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:05:12.570694    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:12.584518    9168 logs.go:276] 2 containers: [6a43bd083386 1ccd3a59766f]
	I0807 11:05:12.584586    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:12.596209    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:05:12.596296    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:12.606999    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:05:12.607075    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:12.618087    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:05:12.618157    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:12.630242    9168 logs.go:276] 0 containers: []
	W0807 11:05:12.630254    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:12.630314    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:12.642533    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:05:12.642550    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:12.642557    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:12.647662    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:05:12.647674    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:05:12.662795    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:05:12.662812    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:05:12.675230    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:05:12.675242    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:05:12.689587    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:05:12.689602    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:05:12.701531    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:05:12.701546    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:12.714064    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:12.714079    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:12.751786    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:12.751800    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:12.790121    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:05:12.790130    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:05:12.805243    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:05:12.805255    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:05:12.817428    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:05:12.817441    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:05:12.833734    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:05:12.833745    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:05:12.853995    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:12.854006    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:15.381966    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:15.470962    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:20.384261    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:20.384512    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:20.414868    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:05:20.414967    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:20.433587    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:05:20.433670    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:20.445766    9168 logs.go:276] 2 containers: [6a43bd083386 1ccd3a59766f]
	I0807 11:05:20.445841    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:20.456461    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:05:20.456529    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:20.466559    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:05:20.466630    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:20.477613    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:05:20.477684    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:20.488988    9168 logs.go:276] 0 containers: []
	W0807 11:05:20.489001    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:20.489064    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:20.500579    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:05:20.500594    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:20.500600    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:20.505428    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:20.505437    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:20.542595    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:05:20.542604    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:05:20.555748    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:05:20.555759    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:05:20.567828    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:05:20.567839    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:05:20.583557    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:05:20.583572    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:05:20.603746    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:05:20.603759    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:05:20.617048    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:20.617060    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:20.655881    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:05:20.655894    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:20.668714    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:20.668729    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:20.693679    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:05:20.693699    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:05:20.708956    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:05:20.708968    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:05:20.731095    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:05:20.731108    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
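Each gathering pass above starts by discovering container IDs with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, one call per control-plane component (kube-apiserver, etcd, coredns, and so on). A minimal Go sketch of that discovery step, assuming the Docker CLI is on PATH inside the guest; the function name `containerIDs` and the example component are illustrative, not minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers (running or exited) whose name matches
    // the kubeadm naming convention k8s_<component>, returning their short IDs.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, fmt.Errorf("listing %s containers: %w", component, err)
        }
        // One ID per line; strings.Fields also drops the trailing newline.
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }

A single ID (as for kube-apiserver here, [d228b4cb54d3]) means one container instance; two IDs (as the other process sees) indicate a restarted component whose exited container is still present.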
	I0807 11:05:20.473106    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:20.473186    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:20.484884    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:05:20.484955    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:20.496344    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:05:20.496418    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:20.507470    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:05:20.507535    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:20.518737    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:05:20.518809    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:20.530172    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:05:20.530241    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:20.541507    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:05:20.541579    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:20.553211    9637 logs.go:276] 0 containers: []
	W0807 11:05:20.553224    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:20.553292    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:20.564572    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:05:20.564590    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:20.564596    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:20.605221    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:20.605234    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:20.609778    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:05:20.609790    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:05:20.642138    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:05:20.642148    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:05:20.657124    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:05:20.657133    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:05:20.672870    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:20.672879    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:20.709915    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:05:20.709924    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:05:20.725369    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:05:20.725380    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:05:20.741591    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:05:20.741608    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:20.755415    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:05:20.755426    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:05:20.773025    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:05:20.773036    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:05:20.787024    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:05:20.787036    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:05:20.798942    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:20.798958    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:20.824091    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:05:20.824098    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:05:20.835817    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:05:20.835828    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:05:20.853511    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:05:20.853520    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:05:20.874424    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:05:20.874434    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
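Interleaved with these gathering passes, both minikube processes (PIDs 9168 and 9637) keep probing the apiserver at https://10.0.2.15:8443/healthz and hitting the client timeout, which produces the recurring api_server.go:253 "Checking" / api_server.go:269 "stopped" pairs. A rough sketch of that poll loop, assuming a plain net/http client with a per-request timeout and skipped TLS verification; the URL comes from the log, while the names, intervals, and TLS handling are assumptions of this sketch:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // pollHealthz probes the apiserver /healthz endpoint until it answers
    // 200 OK or the attempt budget runs out, mirroring the "Checking apiserver
    // healthz ..." / "stopped: ..." pairs in the log above.
    func pollHealthz(url string, timeout, interval time.Duration, attempts int) error {
        client := &http.Client{
            Timeout: timeout,
            // The guest apiserver serves a self-signed certificate, so this
            // probe skips verification (an assumption for the sketch).
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for i := 0; i < attempts; i++ {
            fmt.Printf("Checking apiserver healthz at %s ...\n", url)
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            } else {
                // When the deadline fires, err reads "(Client.Timeout exceeded
                // while awaiting headers)", matching the stopped: lines above.
                fmt.Printf("stopped: %s: %v\n", url, err)
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("apiserver never reported healthy at %s", url)
    }

    func main() {
        _ = pollHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 2*time.Second, 3)
    }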
	I0807 11:05:23.387830    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:23.248963    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:28.390097    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:28.390168    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:28.402171    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:05:28.402250    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:28.414213    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:05:28.414277    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:28.425311    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:05:28.425386    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:28.436834    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:05:28.436913    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:28.449322    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:05:28.449391    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:28.461039    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:05:28.461111    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:28.471890    9637 logs.go:276] 0 containers: []
	W0807 11:05:28.471903    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:28.471968    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:28.483700    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:05:28.483715    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:05:28.483720    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:05:28.502381    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:05:28.502395    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:28.522930    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:05:28.522941    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:05:28.537421    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:05:28.537432    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:05:28.552638    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:05:28.552649    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:05:28.569592    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:05:28.569604    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:05:28.581747    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:05:28.581761    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:05:28.608451    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:05:28.608468    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:05:28.625208    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:05:28.625219    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:05:28.640256    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:05:28.640268    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:05:28.652554    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:28.652565    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:28.691343    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:28.691350    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:28.695951    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:05:28.695958    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:05:28.708539    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:05:28.708550    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:05:28.723536    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:28.723546    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:28.748505    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:28.748512    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:28.251351    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:28.251787    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:28.293014    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:05:28.293157    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:28.315791    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:05:28.315899    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:28.331253    9168 logs.go:276] 2 containers: [6a43bd083386 1ccd3a59766f]
	I0807 11:05:28.331329    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:28.344272    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:05:28.344339    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:28.358891    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:05:28.358958    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:28.370382    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:05:28.370458    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:28.385192    9168 logs.go:276] 0 containers: []
	W0807 11:05:28.385203    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:28.385261    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:28.396600    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:05:28.396616    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:05:28.396621    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:05:28.410838    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:05:28.410850    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:05:28.427602    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:05:28.427611    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:28.442121    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:05:28.442136    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:05:28.457545    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:05:28.457563    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:05:28.471195    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:05:28.471208    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:05:28.483692    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:05:28.483704    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:05:28.502594    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:28.502603    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:28.541205    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:28.541226    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:28.546730    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:28.546748    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:28.583931    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:05:28.583942    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:05:28.600409    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:05:28.600422    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:05:28.614821    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:28.614834    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
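Each "Gathering logs for ..." step above shells into the guest and runs either `docker logs --tail 400 <id>` for a container or `sudo journalctl -u <unit> -n 400` for a systemd unit (kubelet, docker, cri-docker). A condensed sketch of both fetches, with the 400-line cap taken from the log and the wrapper names (`containerLogs`, `unitLogs`) purely illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerLogs fetches the last n lines of a container's output, the way
    // the "docker logs --tail 400 <id>" invocations above do. CombinedOutput
    // is used because docker logs writes to both stdout and stderr.
    func containerLogs(id string, n int) (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            fmt.Sprintf("docker logs --tail %d %s", n, id)).CombinedOutput()
        return string(out), err
    }

    // unitLogs fetches the last n journal lines for a systemd unit, matching
    // the "sudo journalctl -u kubelet -n 400" style calls in the log.
    func unitLogs(unit string, n int) (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            fmt.Sprintf("sudo journalctl -u %s -n %d", unit, n)).CombinedOutput()
        return string(out), err
    }

    func main() {
        // d228b4cb54d3 is the kube-apiserver container ID seen in the log above.
        logs, err := containerLogs("d228b4cb54d3", 400)
        if err != nil {
            fmt.Println(err)
        }
        fmt.Print(logs)
    }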
	I0807 11:05:28.787893    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:05:28.787905    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:05:31.309474    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:31.143488    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:36.312027    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:36.312137    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:36.328806    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:05:36.328874    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:36.340690    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:05:36.340759    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:36.352710    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:05:36.352779    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:36.364193    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:05:36.364263    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:36.376416    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:05:36.376506    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:36.388149    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:05:36.388221    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:36.398959    9637 logs.go:276] 0 containers: []
	W0807 11:05:36.398969    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:36.399027    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:36.411024    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:05:36.411039    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:05:36.411044    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:05:36.425453    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:05:36.425462    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:05:36.441565    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:05:36.441574    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:05:36.455971    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:05:36.455985    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:05:36.468955    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:05:36.468967    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:36.481244    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:05:36.481256    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:05:36.495138    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:36.495152    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:36.521847    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:05:36.521858    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:05:36.533931    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:36.533945    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:36.572385    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:36.572393    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:36.576728    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:05:36.576735    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:05:36.604323    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:05:36.604335    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:05:36.621477    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:05:36.621488    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:05:36.642385    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:36.642397    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:36.676801    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:05:36.676813    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:05:36.693905    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:05:36.693916    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:05:36.707999    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:05:36.708008    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:05:36.145699    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:36.145925    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:36.169700    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:05:36.169808    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:36.185812    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:05:36.185891    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:36.199147    9168 logs.go:276] 2 containers: [6a43bd083386 1ccd3a59766f]
	I0807 11:05:36.199212    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:36.210129    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:05:36.210188    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:36.220761    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:05:36.220830    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:36.232040    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:05:36.232115    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:36.242202    9168 logs.go:276] 0 containers: []
	W0807 11:05:36.242216    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:36.242272    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:36.252301    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:05:36.252317    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:05:36.252323    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:05:36.263999    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:05:36.264009    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:05:36.275769    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:36.275781    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:36.312368    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:05:36.312376    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:05:36.327448    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:05:36.327460    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:05:36.340150    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:05:36.340162    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:05:36.356474    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:05:36.356484    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:05:36.380850    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:05:36.380863    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:05:36.394161    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:36.394172    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:36.421088    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:05:36.421101    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:36.434283    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:36.434295    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:36.439979    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:36.439991    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:36.485852    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:05:36.485865    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:05:39.002789    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:39.224419    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:44.005037    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:44.005206    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:44.018867    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:05:44.018953    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:44.033084    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:05:44.033153    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:44.043029    9168 logs.go:276] 2 containers: [6a43bd083386 1ccd3a59766f]
	I0807 11:05:44.043093    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:44.059767    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:05:44.059840    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:44.072094    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:05:44.072167    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:44.086026    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:05:44.086091    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:44.101399    9168 logs.go:276] 0 containers: []
	W0807 11:05:44.101413    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:44.101468    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:44.111875    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:05:44.111892    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:44.111897    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:44.117200    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:44.117210    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:44.153210    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:05:44.153219    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:05:44.172072    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:05:44.172084    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:05:44.186668    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:05:44.186682    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:05:44.198209    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:05:44.198221    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:05:44.209418    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:44.209430    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:44.233488    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:44.233512    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:44.270928    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:05:44.270941    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:05:44.286376    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:05:44.286394    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:05:44.299788    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:05:44.299800    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:05:44.312568    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:05:44.312579    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:05:44.332361    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:05:44.332378    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:44.225890    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:44.225964    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:44.237003    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:05:44.237070    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:44.252791    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:05:44.252856    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:44.264643    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:05:44.264711    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:44.276194    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:05:44.276276    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:44.294070    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:05:44.294143    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:44.306264    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:05:44.306336    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:44.322556    9637 logs.go:276] 0 containers: []
	W0807 11:05:44.322569    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:44.322629    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:44.333992    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:05:44.334009    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:05:44.334014    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:05:44.346795    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:44.346806    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:44.384889    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:05:44.384903    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:05:44.396092    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:05:44.396103    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:05:44.410235    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:05:44.410247    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:05:44.425046    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:05:44.425059    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:05:44.450205    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:05:44.450215    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:05:44.461579    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:05:44.461592    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:05:44.472795    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:05:44.472806    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:44.484249    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:44.484260    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:44.488676    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:05:44.488685    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:05:44.505575    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:05:44.505589    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:05:44.521216    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:05:44.521226    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:05:44.537019    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:44.537032    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:44.574626    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:05:44.574636    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:05:44.597564    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:05:44.597577    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:05:44.614773    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:44.614787    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:47.140243    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:46.846463    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:52.141601    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:52.141680    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:52.153336    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:05:52.153413    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:52.164440    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:05:52.164511    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:52.175978    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:05:52.176063    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:52.187468    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:05:52.187545    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:52.197916    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:05:52.197987    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:52.209523    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:05:52.209594    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:52.220063    9637 logs.go:276] 0 containers: []
	W0807 11:05:52.220075    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:52.220132    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:52.230923    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:05:52.230944    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:05:52.230949    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:05:52.245776    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:05:52.245789    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:05:52.260117    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:05:52.260129    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:05:52.272132    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:52.272143    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:52.311553    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:52.311564    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:52.349447    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:05:52.349457    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:05:52.363788    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:05:52.363798    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:05:52.377627    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:05:52.377641    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:05:52.392646    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:05:52.392655    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:05:52.406514    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:05:52.406525    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:05:52.424212    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:52.424222    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:52.449108    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:52.449117    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:52.453244    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:05:52.453251    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:05:52.477481    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:05:52.477491    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:05:52.492806    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:05:52.492818    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:05:52.504374    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:05:52.504386    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:05:52.521662    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:05:52.521672    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
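The "container status" pass uses a small shell fallback chain: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. That is: resolve crictl if it is installed, and if the crictl invocation fails for any reason, fall back to `docker ps -a`. A sketch of the same fallback driven from Go, with the command string copied from the log and the wrapper function illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus prefers crictl for the runtime-level view and falls back
    // to docker when crictl is absent or errors, as the shell one-liner does.
    func containerStatus() (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
        if err != nil {
            return "", fmt.Errorf("container status: %w", err)
        }
        return string(out), nil
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Print(out)
    }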
	I0807 11:05:51.849030    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:51.849222    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:51.865067    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:05:51.865150    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:51.877036    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:05:51.877103    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:51.887636    9168 logs.go:276] 3 containers: [0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:05:51.887708    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:51.898375    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:05:51.898442    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:51.909296    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:05:51.909364    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:51.919843    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:05:51.919914    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:51.929950    9168 logs.go:276] 0 containers: []
	W0807 11:05:51.929962    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:51.930023    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:51.940735    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:05:51.940751    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:05:51.940756    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:05:51.955357    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:05:51.955370    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:05:51.967959    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:05:51.967970    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:05:51.980099    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:05:51.980112    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:05:51.993904    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:05:51.993918    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:05:52.012916    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:05:52.012930    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:05:52.024721    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:52.024731    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:52.047843    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:05:52.047851    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:52.059024    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:52.059035    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:52.063637    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:05:52.063646    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:05:52.077554    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:05:52.077564    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:05:52.088998    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:05:52.089012    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:05:52.110112    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:52.110125    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:52.147364    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:52.147377    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
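The "describe nodes" pass differs from the container fetches: it runs the guest's own version-pinned kubectl binary (/var/lib/minikube/binaries/v1.24.1/kubectl) against the in-guest kubeconfig, so it works regardless of the host's kubectl or context. A sketch under the same assumptions as the earlier ones; the paths come from the log, the function name does not:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // describeNodes runs the version-pinned kubectl shipped inside the guest,
    // pointed at the guest kubeconfig, as the log's "describe nodes" step does.
    func describeNodes(version string) (string, error) {
        cmd := fmt.Sprintf(
            "sudo /var/lib/minikube/binaries/%s/kubectl describe nodes "+
                "--kubeconfig=/var/lib/minikube/kubeconfig", version)
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, _ := describeNodes("v1.24.1")
        fmt.Print(out)
    }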
	I0807 11:05:54.688241    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:55.035702    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:59.690859    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:59.691133    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:59.717187    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:05:59.717308    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:59.734100    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:05:59.734178    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:59.747553    9168 logs.go:276] 3 containers: [0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:05:59.747624    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:59.760669    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:05:59.760733    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:59.771035    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:05:59.771089    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:59.782713    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:05:59.782780    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:59.793127    9168 logs.go:276] 0 containers: []
	W0807 11:05:59.793137    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:59.793188    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:59.803748    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:05:59.803762    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:59.803767    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:59.840164    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:05:59.840172    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:05:59.851704    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:05:59.851713    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:05:59.869981    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:05:59.869991    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:05:59.882292    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:59.882303    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:59.887077    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:05:59.887083    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:05:59.907141    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:05:59.907152    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:05:59.924121    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:05:59.924131    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:05:59.935670    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:59.935680    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:59.958977    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:05:59.958988    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:59.971665    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:59.971679    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:00.005603    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:06:00.005617    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:06:00.017394    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:06:00.017407    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:06:00.029066    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:06:00.029079    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:06:00.038302    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:00.038398    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:00.049210    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:06:00.049281    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:00.059573    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:06:00.059652    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:00.070194    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:06:00.070262    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:00.080719    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:06:00.080790    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:00.091470    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:06:00.091535    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:00.102018    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:06:00.102092    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:00.112107    9637 logs.go:276] 0 containers: []
	W0807 11:06:00.112117    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:00.112168    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:00.122239    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:06:00.122257    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:06:00.122263    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:06:00.136372    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:06:00.136382    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:06:00.151583    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:06:00.151593    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:06:00.163449    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:06:00.163459    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:06:00.175110    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:06:00.175120    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:06:00.187263    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:00.187273    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:00.191722    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:06:00.191731    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:06:00.205689    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:00.205699    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:00.227914    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:00.227924    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:00.265860    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:00.265870    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:00.303069    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:06:00.303080    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:06:00.317272    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:06:00.317285    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:06:00.328557    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:06:00.328570    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:06:00.346145    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:06:00.346155    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:06:00.371919    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:06:00.371931    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:06:00.386403    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:06:00.386414    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:06:00.402312    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:06:00.402322    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:02.915897    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:02.549123    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:07.918098    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:07.918186    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:07.928762    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:06:07.928835    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:07.939451    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:06:07.939525    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:07.950366    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:06:07.950434    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:07.961867    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:06:07.961940    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:07.973038    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:06:07.973112    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:07.984074    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:06:07.984143    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:07.994866    9637 logs.go:276] 0 containers: []
	W0807 11:06:07.994876    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:07.994933    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:08.005458    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:06:08.005476    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:06:08.005482    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:06:08.023795    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:06:08.023807    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:06:08.038581    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:06:08.038592    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:06:08.049620    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:06:08.049634    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:06:08.060919    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:08.060929    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:08.064966    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:06:08.064972    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:06:08.090100    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:06:08.090110    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:06:08.108514    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:08.108523    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:08.132814    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:06:08.132824    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:06:08.143891    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:06:08.143903    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:06:08.161364    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:08.161375    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:08.195227    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:06:08.195240    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:06:08.209358    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:06:08.209371    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:06:08.224043    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:06:08.224057    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:06:08.235726    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:06:08.235737    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:08.247821    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:08.247834    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:08.285920    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:06:08.285937    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:06:07.551517    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:07.551705    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:07.574717    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:06:07.574822    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:07.591055    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:06:07.591145    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:07.603933    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:06:07.604006    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:07.615365    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:06:07.615438    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:07.626401    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:06:07.626472    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:07.642444    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:06:07.642523    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:07.652968    9168 logs.go:276] 0 containers: []
	W0807 11:06:07.652981    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:07.653042    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:07.663412    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:06:07.663429    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:06:07.663434    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:06:07.678880    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:07.678894    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:07.704188    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:06:07.704200    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:06:07.721276    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:06:07.721286    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:07.733989    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:07.734002    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:07.771827    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:06:07.771840    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:06:07.784711    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:06:07.784722    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:06:07.796408    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:06:07.796422    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:06:07.810695    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:06:07.810704    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:06:07.825498    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:06:07.825513    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:06:07.837169    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:06:07.837179    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:06:07.848791    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:06:07.848801    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:06:07.860211    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:07.860221    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:07.895487    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:07.895495    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:07.899794    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:06:07.899801    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:06:10.413424    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:10.802076    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:15.415821    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:15.416225    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:15.452916    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:06:15.453056    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:15.475360    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:06:15.475467    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:15.490304    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:06:15.490383    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:15.506890    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:06:15.506962    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:15.517994    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:06:15.518066    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:15.528789    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:06:15.528858    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:15.539084    9168 logs.go:276] 0 containers: []
	W0807 11:06:15.539094    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:15.539151    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:15.549780    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:06:15.549798    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:15.549803    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:15.554617    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:06:15.554628    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:06:15.569616    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:15.569626    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:15.606747    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:06:15.606761    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:06:15.620788    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:06:15.620800    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:06:15.635095    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:06:15.635105    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:06:15.646538    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:06:15.646553    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:06:15.661970    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:06:15.661983    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:06:15.673504    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:15.673514    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:15.697719    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:06:15.697726    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:15.720343    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:15.720353    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:15.761354    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:06:15.761365    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:06:15.772885    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:06:15.772897    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:06:15.784311    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:06:15.784324    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:06:15.797107    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:06:15.797118    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:06:15.804324    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:15.804430    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:15.815953    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:06:15.816024    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:15.830633    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:06:15.830703    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:15.842938    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:06:15.843015    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:15.854164    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:06:15.854241    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:15.864980    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:06:15.865050    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:15.876204    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:06:15.876271    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:15.885897    9637 logs.go:276] 0 containers: []
	W0807 11:06:15.885908    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:15.885960    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:15.896203    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:06:15.896224    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:15.896231    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:15.918680    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:15.918688    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:15.922928    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:06:15.922936    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:06:15.942290    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:06:15.942300    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:06:15.966774    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:06:15.966792    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:06:15.982564    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:06:15.982578    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:06:16.000764    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:06:16.000774    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:06:16.012286    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:06:16.012298    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:06:16.024883    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:06:16.024894    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:16.037524    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:16.037535    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:16.076489    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:16.076497    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:16.110093    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:06:16.110104    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:06:16.124843    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:06:16.124855    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:06:16.138072    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:06:16.138083    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:06:16.152521    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:06:16.152530    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:06:16.164761    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:06:16.164773    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:06:16.179470    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:06:16.179480    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:06:18.692648    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:18.317524    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:23.694879    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:23.695076    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:23.706345    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:06:23.706420    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:23.716644    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:06:23.716716    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:23.727169    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:06:23.727240    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:23.737891    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:06:23.737957    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:23.748272    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:06:23.748342    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:23.758728    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:06:23.758795    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:23.320042    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:23.320213    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:23.332817    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:06:23.332897    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:23.343802    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:06:23.343863    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:23.354811    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:06:23.354878    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:23.368218    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:06:23.368275    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:23.379571    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:06:23.379635    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:23.399629    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:06:23.399697    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:23.411311    9168 logs.go:276] 0 containers: []
	W0807 11:06:23.411325    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:23.411383    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:23.426376    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:06:23.426394    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:06:23.426399    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:06:23.440489    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:06:23.440500    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:06:23.452065    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:23.452076    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:23.486708    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:23.486718    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:23.490856    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:06:23.490864    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:06:23.504879    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:06:23.504888    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:06:23.520043    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:06:23.520054    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:06:23.531857    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:06:23.531869    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:06:23.543095    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:06:23.543105    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:06:23.561985    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:23.561998    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:23.586071    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:23.586077    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:23.622414    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:06:23.622427    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:06:23.634797    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:06:23.634810    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:06:23.650330    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:06:23.650339    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:06:23.665484    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:06:23.665495    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:23.768657    9637 logs.go:276] 0 containers: []
	W0807 11:06:23.768667    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:23.768717    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:23.779314    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:06:23.779333    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:23.779340    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:23.784215    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:06:23.784222    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:06:23.795427    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:06:23.795438    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:23.808226    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:23.808239    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:23.851651    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:06:23.851662    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:06:23.866314    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:06:23.866323    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:06:23.881853    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:23.881863    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:23.920319    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:06:23.920328    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:06:23.934498    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:06:23.934507    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:06:23.946598    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:06:23.946608    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:06:23.957666    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:06:23.957679    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:06:23.974487    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:06:23.974497    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:06:24.000290    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:06:24.000303    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:06:24.015184    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:06:24.015200    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:06:24.032621    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:06:24.032631    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:06:24.047403    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:06:24.047412    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:06:24.059107    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:24.059117    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:26.585448    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:26.179140    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:31.582598    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:31.582741    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:31.594025    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:06:31.594094    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:31.604930    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:06:31.605006    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:31.615140    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:06:31.615212    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:31.625833    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:06:31.625900    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:31.636312    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:06:31.636378    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:31.647218    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:06:31.647288    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:31.657316    9637 logs.go:276] 0 containers: []
	W0807 11:06:31.657326    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:31.657381    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:31.667945    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:06:31.667962    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:06:31.667967    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:06:31.688481    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:31.688490    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:31.712000    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:06:31.712007    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:06:31.736684    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:06:31.736694    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:06:31.755781    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:06:31.755791    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:06:31.767222    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:31.767233    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:31.806562    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:06:31.806573    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:06:31.821804    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:06:31.821815    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:06:31.841371    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:06:31.841380    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:06:31.854227    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:06:31.854238    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:31.868703    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:31.868716    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:31.908225    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:06:31.908245    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:06:31.926815    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:06:31.926836    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:06:31.944472    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:06:31.944486    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:06:31.959383    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:06:31.959395    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:06:31.977127    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:31.977137    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:31.982073    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:06:31.982080    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:06:31.177118    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:31.177325    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:31.207836    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:06:31.207921    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:31.221682    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:06:31.221745    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:31.232868    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:06:31.232939    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:31.243682    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:06:31.243740    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:31.254231    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:06:31.254294    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:31.264590    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:06:31.264657    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:31.274951    9168 logs.go:276] 0 containers: []
	W0807 11:06:31.274963    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:31.275016    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:31.289783    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:06:31.289800    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:06:31.289806    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:06:31.302748    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:06:31.302761    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:06:31.317685    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:06:31.317696    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:06:31.329309    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:31.329323    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:31.333764    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:06:31.333771    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:06:31.347976    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:06:31.347986    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:06:31.360050    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:06:31.360062    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:06:31.371967    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:06:31.371977    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:06:31.389779    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:31.389792    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:31.415529    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:06:31.415538    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:06:31.430425    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:06:31.430437    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:06:31.442042    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:31.442054    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:31.477943    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:31.477951    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:31.543998    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:06:31.544012    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:06:31.556363    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:06:31.556374    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
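Besides per-container logs, every iteration also pulls host-level sources over the same SSH channel: the kubelet and Docker/cri-docker journals, filtered dmesg output, and an overall container-status listing. These are copied verbatim from the Run lines above and grouped here only for reference:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a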
	I0807 11:06:34.066814    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:34.501360    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:39.063903    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:39.064065    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:39.079697    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:06:39.079771    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:39.090386    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:06:39.090450    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:39.101053    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:06:39.101127    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:39.114484    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:06:39.114556    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:39.124528    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:06:39.124589    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:39.135370    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:06:39.135434    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:39.145847    9168 logs.go:276] 0 containers: []
	W0807 11:06:39.145858    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:39.145911    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:39.156068    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:06:39.156086    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:06:39.156093    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:06:39.174814    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:06:39.174827    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:06:39.189694    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:06:39.189704    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:39.201193    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:39.201206    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:39.235613    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:06:39.235621    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:06:39.249755    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:06:39.249768    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:06:39.261349    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:06:39.261361    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:06:39.278687    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:06:39.278699    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:06:39.289984    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:39.289999    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:39.314832    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:39.314839    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:39.319520    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:06:39.319530    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:06:39.334255    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:06:39.334268    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:06:39.346195    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:39.346204    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:39.383071    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:06:39.383082    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:06:39.394528    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:06:39.394541    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:06:39.498557    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:39.498728    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:39.511692    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:06:39.511760    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:39.522142    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:06:39.522211    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:39.533013    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:06:39.533086    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:39.543336    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:06:39.543408    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:39.553580    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:06:39.553637    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:39.564980    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:06:39.565036    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:39.575350    9637 logs.go:276] 0 containers: []
	W0807 11:06:39.575362    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:39.575413    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:39.586142    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:06:39.586161    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:39.586167    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:39.609321    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:39.609330    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:39.613854    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:39.613861    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:39.651190    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:06:39.651201    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:06:39.665125    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:06:39.665139    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:06:39.677086    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:06:39.677097    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:06:39.693792    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:39.693805    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:39.731245    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:06:39.731253    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:06:39.744981    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:06:39.744994    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:06:39.759547    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:06:39.759556    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:06:39.775184    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:06:39.775196    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:06:39.789558    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:06:39.789571    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:06:39.804143    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:06:39.804153    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:06:39.816007    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:06:39.816021    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:39.827267    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:06:39.827278    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:06:39.857056    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:06:39.857071    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:06:39.872014    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:06:39.872027    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
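Each iteration runs the same eight container lookups in turn (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, storage-provisioner); "kindnet" always comes back empty here because this cluster does not run it. The recurring sequence condenses to a short loop; a sketch for manual reproduction, not minikube's actual implementation, which issues each lookup separately from Go:

    # Same filters and format string as the Run lines above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      ids=$(docker ps -a --filter=name=k8s_${name} --format={{.ID}})
      echo "${name}: ${ids:-none}"
    done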
	I0807 11:06:42.383840    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:41.906648    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:47.383178    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:47.383262    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:47.394483    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:06:47.394558    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:47.405258    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:06:47.405333    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:47.415592    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:06:47.415657    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:47.426547    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:06:47.426617    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:47.437262    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:06:47.437330    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:47.448286    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:06:47.448352    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:47.458202    9637 logs.go:276] 0 containers: []
	W0807 11:06:47.458215    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:47.458273    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:47.467993    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:06:47.468010    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:47.468015    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:47.472761    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:47.472768    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:47.506000    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:06:47.506012    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:06:47.520171    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:06:47.520183    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:06:47.540545    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:06:47.540555    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:06:47.551658    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:06:47.551670    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:06:47.563038    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:06:47.563049    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:06:47.586282    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:06:47.586292    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:06:47.616248    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:06:47.616260    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:06:47.633219    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:47.633232    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:47.656714    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:06:47.656722    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:47.669802    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:47.669815    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:47.708240    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:06:47.708252    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:06:47.722105    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:06:47.722118    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:06:47.735029    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:06:47.735039    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:06:47.752696    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:06:47.752706    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:06:47.767791    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:06:47.767804    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:06:46.906478    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:46.906725    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:46.932188    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:06:46.932308    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:46.953262    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:06:46.953336    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:46.967618    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:06:46.967696    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:46.979246    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:06:46.979317    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:46.990186    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:06:46.990248    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:47.001799    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:06:47.001860    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:47.012425    9168 logs.go:276] 0 containers: []
	W0807 11:06:47.012436    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:47.012487    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:47.025092    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:06:47.025111    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:06:47.025117    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:06:47.036842    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:06:47.036856    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:06:47.051678    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:06:47.051688    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:06:47.065519    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:06:47.065532    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:06:47.077739    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:06:47.077749    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:06:47.091741    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:06:47.091753    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:06:47.107313    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:06:47.107324    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:06:47.125160    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:06:47.125170    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:47.136579    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:47.136589    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:47.172980    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:47.172988    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:47.177965    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:47.177973    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:47.213872    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:06:47.213882    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:06:47.228764    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:06:47.228774    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:06:47.249322    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:06:47.249334    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:06:47.268265    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:47.268274    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:49.795058    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:50.282925    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:54.795959    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:54.796336    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:54.829390    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:06:54.829523    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:54.849250    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:06:54.849347    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:54.863756    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:06:54.863834    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:54.877029    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:06:54.877101    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:54.887709    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:06:54.887776    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:54.898538    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:06:54.898610    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:54.909359    9168 logs.go:276] 0 containers: []
	W0807 11:06:54.909369    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:54.909429    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:54.920652    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:06:54.920668    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:06:54.920673    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:06:54.932544    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:54.932555    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:54.957400    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:06:54.957411    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:54.970878    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:54.970889    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:54.975514    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:06:54.975523    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:06:54.998646    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:06:54.998656    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:06:55.013157    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:06:55.013169    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:06:55.031016    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:55.031026    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:55.065412    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:06:55.065420    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:06:55.078922    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:06:55.078934    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:06:55.090929    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:06:55.090940    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:06:55.112739    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:06:55.112755    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:06:55.124825    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:06:55.124840    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:06:55.136231    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:06:55.136244    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:06:55.151660    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:55.151670    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:55.283331    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:55.283516    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:55.295564    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:06:55.295635    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:55.306356    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:06:55.306428    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:55.323165    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:06:55.323231    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:55.333680    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:06:55.333747    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:55.347637    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:06:55.347707    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:55.359445    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:06:55.359509    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:55.369234    9637 logs.go:276] 0 containers: []
	W0807 11:06:55.369245    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:55.369303    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:55.380007    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:06:55.380028    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:06:55.380033    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:06:55.394447    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:06:55.394457    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:55.406479    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:06:55.406489    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:06:55.431481    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:06:55.431491    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:06:55.453350    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:06:55.453359    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:06:55.468487    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:06:55.468498    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:06:55.480119    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:06:55.480129    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:06:55.494045    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:06:55.494055    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:06:55.507529    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:06:55.507539    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:06:55.518763    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:55.518775    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:55.556597    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:06:55.556608    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:06:55.571425    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:06:55.571440    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:06:55.588999    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:06:55.589009    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:06:55.600952    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:55.600964    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:55.637956    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:55.637964    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:55.642057    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:06:55.642065    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:06:55.656481    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:55.656491    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:58.180137    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:57.690857    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:03.181250    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:03.181417    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:07:03.193205    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:07:03.193272    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:07:03.231145    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:07:03.231213    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:07:03.241654    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:07:03.241716    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:07:03.252534    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:07:03.252599    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:07:03.263181    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:07:03.263245    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:07:03.274428    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:07:03.274489    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:07:03.284675    9637 logs.go:276] 0 containers: []
	W0807 11:07:03.284686    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:07:03.284736    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:07:03.295027    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:07:03.295049    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:07:03.295055    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:07:03.315928    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:07:03.315938    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:07:03.351419    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:07:03.351433    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:07:03.365628    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:07:03.365638    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:07:03.379588    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:07:03.379600    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:07:03.394925    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:07:03.394938    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:07:03.412900    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:07:03.412910    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:07:03.425739    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:07:03.425749    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:07:03.464688    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:07:03.464700    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:07:03.478744    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:07:03.478753    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:07:03.490467    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:07:03.490476    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:07:03.501858    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:07:03.501868    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:07:03.523650    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:07:03.523658    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:07:03.527813    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:07:03.527819    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:07:03.553017    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:07:03.553030    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:07:03.567728    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:07:03.567739    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:07:03.579681    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:07:03.579692    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:07:02.692018    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:02.692215    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:07:02.711123    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:07:02.711219    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:07:02.726571    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:07:02.726649    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:07:02.739301    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:07:02.739381    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:07:02.749689    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:07:02.749757    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:07:02.759720    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:07:02.759781    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:07:02.769847    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:07:02.769919    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:07:02.780099    9168 logs.go:276] 0 containers: []
	W0807 11:07:02.780114    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:07:02.780176    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:07:02.792988    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:07:02.793005    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:07:02.793009    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:07:02.805205    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:07:02.805217    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:07:02.817301    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:07:02.817313    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:07:02.833259    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:07:02.833271    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:07:02.851106    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:07:02.851117    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:07:02.875652    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:07:02.875663    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:07:02.887732    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:07:02.887743    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:07:02.923252    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:07:02.923260    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:07:02.962449    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:07:02.962462    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:07:02.980312    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:07:02.980323    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:07:02.985097    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:07:02.985103    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:07:02.997554    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:07:02.997564    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:07:03.008939    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:07:03.008950    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:07:03.023177    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:07:03.023186    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:07:03.035061    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:07:03.035072    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:07:05.551681    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:06.093452    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:10.553671    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:10.553908    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:07:10.572392    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:07:10.572486    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:07:10.586230    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:07:10.586311    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:07:10.599832    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:07:10.599909    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:07:10.610316    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:07:10.610379    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:07:10.620898    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:07:10.620968    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:07:10.631075    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:07:10.631145    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:07:10.644636    9168 logs.go:276] 0 containers: []
	W0807 11:07:10.644649    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:07:10.644704    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:07:10.654776    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:07:10.654796    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:07:10.654802    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:07:10.691699    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:07:10.691710    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:07:10.705240    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:07:10.705250    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:07:10.716341    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:07:10.716353    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:07:10.728273    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:07:10.728285    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:07:10.740510    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:07:10.740520    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:07:10.755232    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:07:10.755243    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:07:10.766986    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:07:10.766998    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:07:10.778566    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:07:10.778575    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:07:10.818802    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:07:10.818812    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:07:10.834712    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:07:10.834720    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:07:10.859783    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:07:10.859793    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:07:10.864342    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:07:10.864351    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:07:10.879167    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:07:10.879181    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:07:10.893059    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:07:10.893071    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:07:11.094987    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:11.095036    9637 kubeadm.go:597] duration metric: took 4m3.852897292s to restartPrimaryControlPlane
	W0807 11:07:11.095115    9637 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0807 11:07:11.095145    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0807 11:07:12.128041    9637 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.033005875s)
	I0807 11:07:12.128124    9637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 11:07:12.132930    9637 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0807 11:07:12.135560    9637 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0807 11:07:12.138157    9637 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0807 11:07:12.138162    9637 kubeadm.go:157] found existing configuration files:
	
	I0807 11:07:12.138179    9637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/admin.conf
	I0807 11:07:12.140756    9637 kubeadm.go:163] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0807 11:07:12.140782    9637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0807 11:07:12.143278    9637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/kubelet.conf
	I0807 11:07:12.145763    9637 kubeadm.go:163] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0807 11:07:12.145785    9637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0807 11:07:12.148859    9637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/controller-manager.conf
	I0807 11:07:12.151264    9637 kubeadm.go:163] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0807 11:07:12.151285    9637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0807 11:07:12.153904    9637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/scheduler.conf
	I0807 11:07:12.156979    9637 kubeadm.go:163] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0807 11:07:12.157001    9637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0807 11:07:12.159544    9637 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0807 11:07:12.177293    9637 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0807 11:07:12.177420    9637 kubeadm.go:310] [preflight] Running pre-flight checks
	I0807 11:07:12.225439    9637 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0807 11:07:12.225495    9637 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0807 11:07:12.225581    9637 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0807 11:07:12.277221    9637 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0807 11:07:12.280426    9637 out.go:204]   - Generating certificates and keys ...
	I0807 11:07:12.280456    9637 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0807 11:07:12.280493    9637 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0807 11:07:12.280528    9637 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0807 11:07:12.280553    9637 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0807 11:07:12.280612    9637 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0807 11:07:12.280647    9637 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0807 11:07:12.280712    9637 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0807 11:07:12.280748    9637 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0807 11:07:12.280783    9637 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0807 11:07:12.280823    9637 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0807 11:07:12.280846    9637 kubeadm.go:310] [certs] Using the existing "sa" key
	I0807 11:07:12.280876    9637 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0807 11:07:12.359064    9637 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0807 11:07:12.437833    9637 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0807 11:07:12.510691    9637 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0807 11:07:12.649138    9637 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0807 11:07:12.683389    9637 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0807 11:07:12.683888    9637 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0807 11:07:12.683940    9637 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0807 11:07:12.770381    9637 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0807 11:07:12.773711    9637 out.go:204]   - Booting up control plane ...
	I0807 11:07:12.773762    9637 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0807 11:07:12.773806    9637 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0807 11:07:12.773839    9637 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0807 11:07:12.773939    9637 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0807 11:07:12.774023    9637 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0807 11:07:13.413051    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:17.272225    9637 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501377 seconds
	I0807 11:07:17.272285    9637 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0807 11:07:17.275841    9637 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0807 11:07:17.791679    9637 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0807 11:07:17.792113    9637 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-423000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0807 11:07:18.296542    9637 kubeadm.go:310] [bootstrap-token] Using token: uoe6y8.pluqtcpnydqamgb7
	I0807 11:07:18.302899    9637 out.go:204]   - Configuring RBAC rules ...
	I0807 11:07:18.302964    9637 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0807 11:07:18.303019    9637 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0807 11:07:18.309571    9637 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0807 11:07:18.310462    9637 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0807 11:07:18.311379    9637 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0807 11:07:18.322082    9637 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0807 11:07:18.334869    9637 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0807 11:07:18.530908    9637 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0807 11:07:18.700860    9637 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0807 11:07:18.701341    9637 kubeadm.go:310] 
	I0807 11:07:18.701370    9637 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0807 11:07:18.701374    9637 kubeadm.go:310] 
	I0807 11:07:18.701408    9637 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0807 11:07:18.701417    9637 kubeadm.go:310] 
	I0807 11:07:18.701432    9637 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0807 11:07:18.701464    9637 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0807 11:07:18.701546    9637 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0807 11:07:18.701561    9637 kubeadm.go:310] 
	I0807 11:07:18.701641    9637 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0807 11:07:18.701651    9637 kubeadm.go:310] 
	I0807 11:07:18.701705    9637 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0807 11:07:18.701711    9637 kubeadm.go:310] 
	I0807 11:07:18.701783    9637 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0807 11:07:18.701833    9637 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0807 11:07:18.701882    9637 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0807 11:07:18.701888    9637 kubeadm.go:310] 
	I0807 11:07:18.701953    9637 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0807 11:07:18.702057    9637 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0807 11:07:18.702068    9637 kubeadm.go:310] 
	I0807 11:07:18.702232    9637 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uoe6y8.pluqtcpnydqamgb7 \
	I0807 11:07:18.702297    9637 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:be167f124a8fb636a1678f53a2902f8fa29ed7e7a056c6f6e13484429b709f7d \
	I0807 11:07:18.702334    9637 kubeadm.go:310] 	--control-plane 
	I0807 11:07:18.702338    9637 kubeadm.go:310] 
	I0807 11:07:18.702383    9637 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0807 11:07:18.702387    9637 kubeadm.go:310] 
	I0807 11:07:18.702434    9637 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uoe6y8.pluqtcpnydqamgb7 \
	I0807 11:07:18.702492    9637 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:be167f124a8fb636a1678f53a2902f8fa29ed7e7a056c6f6e13484429b709f7d 
	I0807 11:07:18.702576    9637 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0807 11:07:18.702586    9637 cni.go:84] Creating CNI manager for ""
	I0807 11:07:18.702594    9637 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 11:07:18.705299    9637 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0807 11:07:18.713312    9637 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0807 11:07:18.716924    9637 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0807 11:07:18.722187    9637 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0807 11:07:18.722256    9637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 11:07:18.722273    9637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-423000 minikube.k8s.io/updated_at=2024_08_07T11_07_18_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e minikube.k8s.io/name=stopped-upgrade-423000 minikube.k8s.io/primary=true
	I0807 11:07:18.773118    9637 ops.go:34] apiserver oom_adj: -16
	I0807 11:07:18.773228    9637 kubeadm.go:1113] duration metric: took 51.035166ms to wait for elevateKubeSystemPrivileges
	I0807 11:07:18.773273    9637 kubeadm.go:394] duration metric: took 4m11.546048291s to StartCluster
	I0807 11:07:18.773299    9637 settings.go:142] acquiring lock: {Name:mk55ff1d0ed65f587ff79ec8ce8fd4d10e83296d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:07:18.773389    9637 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:07:18.773815    9637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/kubeconfig: {Name:mkee6d4905f7dc60ed0b5cf9ef87de0f637b0682 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:07:18.774006    9637 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:07:18.774048    9637 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0807 11:07:18.774115    9637 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-423000"
	I0807 11:07:18.774129    9637 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-423000"
	W0807 11:07:18.774132    9637 addons.go:243] addon storage-provisioner should already be in state true
	I0807 11:07:18.774144    9637 host.go:66] Checking if "stopped-upgrade-423000" exists ...
	I0807 11:07:18.774133    9637 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-423000"
	I0807 11:07:18.774199    9637 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0807 11:07:18.774208    9637 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-423000"
	I0807 11:07:18.775426    9637 kapi.go:59] client config for stopped-upgrade-423000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/client.key", CAFile:"/Users/jenkins/minikube-integration/19389-6671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101f73f90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0807 11:07:18.775539    9637 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-423000"
	W0807 11:07:18.775545    9637 addons.go:243] addon default-storageclass should already be in state true
	I0807 11:07:18.775551    9637 host.go:66] Checking if "stopped-upgrade-423000" exists ...
	I0807 11:07:18.778244    9637 out.go:177] * Verifying Kubernetes components...
	I0807 11:07:18.778585    9637 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0807 11:07:18.782382    9637 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0807 11:07:18.782413    9637 sshutil.go:53] new ssh client: &{IP:localhost Port:51441 SSHKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	I0807 11:07:18.786164    9637 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 11:07:18.414811    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:18.414935    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:07:18.426296    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:07:18.426372    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:07:18.438178    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:07:18.438248    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:07:18.448932    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:07:18.448999    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:07:18.459833    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:07:18.459903    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:07:18.470169    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:07:18.470247    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:07:18.480398    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:07:18.480463    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:07:18.490941    9168 logs.go:276] 0 containers: []
	W0807 11:07:18.490953    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:07:18.491011    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:07:18.501744    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:07:18.501760    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:07:18.501766    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:07:18.507326    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:07:18.507339    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:07:18.527991    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:07:18.528007    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:07:18.548670    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:07:18.548683    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:07:18.567534    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:07:18.567550    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:07:18.611041    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:07:18.611057    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:07:18.624870    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:07:18.624884    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:07:18.638244    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:07:18.638256    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:07:18.649854    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:07:18.649866    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:07:18.686114    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:07:18.686133    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:07:18.704683    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:07:18.704692    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:07:18.716772    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:07:18.716785    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:07:18.729839    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:07:18.729852    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:07:18.749739    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:07:18.749754    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:07:18.762906    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:07:18.762918    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:07:18.789225    9637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 11:07:18.793242    9637 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 11:07:18.793248    9637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0807 11:07:18.793254    9637 sshutil.go:53] new ssh client: &{IP:localhost Port:51441 SSHKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	I0807 11:07:18.891044    9637 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 11:07:18.896051    9637 api_server.go:52] waiting for apiserver process to appear ...
	I0807 11:07:18.896091    9637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 11:07:18.899645    9637 api_server.go:72] duration metric: took 125.63725ms to wait for apiserver process to appear ...
	I0807 11:07:18.899653    9637 api_server.go:88] waiting for apiserver healthz status ...
	I0807 11:07:18.899659    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:18.957076    9637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0807 11:07:18.970026    9637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 11:07:21.289248    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:23.901496    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:23.901570    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:26.291181    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:26.291286    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:07:26.306200    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:07:26.306274    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:07:26.316779    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:07:26.316855    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:07:26.327566    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:07:26.327637    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:07:26.338438    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:07:26.338509    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:07:26.349068    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:07:26.349143    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:07:26.359967    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:07:26.360043    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:07:26.370958    9168 logs.go:276] 0 containers: []
	W0807 11:07:26.370967    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:07:26.371021    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:07:26.381329    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:07:26.381346    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:07:26.381352    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:07:26.399102    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:07:26.399112    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:07:26.412246    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:07:26.412256    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:07:26.423931    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:07:26.423941    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:07:26.435309    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:07:26.435321    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:07:26.446649    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:07:26.446661    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:07:26.458876    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:07:26.458887    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:07:26.481888    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:07:26.481895    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:07:26.516222    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:07:26.516228    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:07:26.551369    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:07:26.551381    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:07:26.565647    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:07:26.565655    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:07:26.576850    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:07:26.576859    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:07:26.588554    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:07:26.588565    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:07:26.594393    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:07:26.594400    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:07:26.606106    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:07:26.606116    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:07:29.122483    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:28.902082    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:28.902102    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:34.124458    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:34.124575    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:07:34.139825    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:07:34.139909    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:07:34.153217    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:07:34.153276    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:07:34.164064    9168 logs.go:276] 4 containers: [4ea04b21f860 0e834a4d33b2 6a43bd083386 1ccd3a59766f]
	I0807 11:07:34.164135    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:07:34.174706    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:07:34.174776    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:07:34.185783    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:07:34.185845    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:07:34.196958    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:07:34.197019    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:07:34.206585    9168 logs.go:276] 0 containers: []
	W0807 11:07:34.206595    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:07:34.206641    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:07:34.216869    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:07:34.216886    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:07:34.216891    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:07:34.253527    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:07:34.253537    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:07:34.264791    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:07:34.264802    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:07:34.276461    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:07:34.276470    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:07:34.291563    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:07:34.291574    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:07:34.296712    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:07:34.296722    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:07:34.310954    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:07:34.310966    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:07:34.323729    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:07:34.323742    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:07:34.335500    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:07:34.335512    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:07:34.373044    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:07:34.373053    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:07:34.391033    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:07:34.391047    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:07:34.402862    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:07:34.402872    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:07:34.416885    9168 logs.go:123] Gathering logs for coredns [1ccd3a59766f] ...
	I0807 11:07:34.416896    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ccd3a59766f"
	I0807 11:07:34.428916    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:07:34.428925    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:07:34.440776    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:07:34.440786    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:07:33.902669    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:33.902694    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:36.966977    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:38.903168    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:38.903189    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:41.969029    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:41.969155    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:07:41.981825    9168 logs.go:276] 1 containers: [d228b4cb54d3]
	I0807 11:07:41.981907    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:07:41.992886    9168 logs.go:276] 1 containers: [8bad33df3502]
	I0807 11:07:41.992953    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:07:42.003918    9168 logs.go:276] 4 containers: [51d9bb212e49 4ea04b21f860 0e834a4d33b2 6a43bd083386]
	I0807 11:07:42.003994    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:07:42.014982    9168 logs.go:276] 1 containers: [fc8a20f65201]
	I0807 11:07:42.015054    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:07:42.025538    9168 logs.go:276] 1 containers: [efde69282e35]
	I0807 11:07:42.025606    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:07:42.044261    9168 logs.go:276] 1 containers: [7a8696ff05a6]
	I0807 11:07:42.044333    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:07:42.055122    9168 logs.go:276] 0 containers: []
	W0807 11:07:42.055136    9168 logs.go:278] No container was found matching "kindnet"
	I0807 11:07:42.055189    9168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:07:42.069042    9168 logs.go:276] 1 containers: [9f46b798aefe]
	I0807 11:07:42.069059    9168 logs.go:123] Gathering logs for kube-scheduler [fc8a20f65201] ...
	I0807 11:07:42.069063    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8a20f65201"
	I0807 11:07:42.083679    9168 logs.go:123] Gathering logs for Docker ...
	I0807 11:07:42.083688    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:07:42.108768    9168 logs.go:123] Gathering logs for coredns [51d9bb212e49] ...
	I0807 11:07:42.108779    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d9bb212e49"
	I0807 11:07:42.120182    9168 logs.go:123] Gathering logs for coredns [4ea04b21f860] ...
	I0807 11:07:42.120201    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ea04b21f860"
	I0807 11:07:42.131696    9168 logs.go:123] Gathering logs for kube-proxy [efde69282e35] ...
	I0807 11:07:42.131706    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efde69282e35"
	I0807 11:07:42.143619    9168 logs.go:123] Gathering logs for container status ...
	I0807 11:07:42.143631    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:07:42.155504    9168 logs.go:123] Gathering logs for kubelet ...
	I0807 11:07:42.155515    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:07:42.191929    9168 logs.go:123] Gathering logs for dmesg ...
	I0807 11:07:42.191936    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:07:42.196743    9168 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:07:42.196750    9168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:07:42.231911    9168 logs.go:123] Gathering logs for kube-apiserver [d228b4cb54d3] ...
	I0807 11:07:42.231923    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d228b4cb54d3"
	I0807 11:07:42.246184    9168 logs.go:123] Gathering logs for etcd [8bad33df3502] ...
	I0807 11:07:42.246192    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bad33df3502"
	I0807 11:07:42.260160    9168 logs.go:123] Gathering logs for coredns [0e834a4d33b2] ...
	I0807 11:07:42.260172    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e834a4d33b2"
	I0807 11:07:42.274200    9168 logs.go:123] Gathering logs for coredns [6a43bd083386] ...
	I0807 11:07:42.274209    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a43bd083386"
	I0807 11:07:42.285807    9168 logs.go:123] Gathering logs for kube-controller-manager [7a8696ff05a6] ...
	I0807 11:07:42.285822    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8696ff05a6"
	I0807 11:07:42.303690    9168 logs.go:123] Gathering logs for storage-provisioner [9f46b798aefe] ...
	I0807 11:07:42.303700    9168 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f46b798aefe"
	I0807 11:07:44.817037    9168 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:43.903993    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:43.904030    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:48.905218    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:48.905260    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0807 11:07:49.325195    9637 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0807 11:07:49.332366    9637 out.go:177] * Enabled addons: storage-provisioner
	I0807 11:07:49.819165    9168 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:49.824829    9168 out.go:177] 
	W0807 11:07:49.828877    9168 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0807 11:07:49.828890    9168 out.go:239] * 
	W0807 11:07:49.829853    9168 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:07:49.840628    9168 out.go:177] 
	I0807 11:07:49.340311    9637 addons.go:510] duration metric: took 30.56758975s for enable addons: enabled=[storage-provisioner]
	I0807 11:07:53.906569    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:53.906608    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:58.908070    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:58.908109    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
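	
	For reference, the healthz probe that both clients above keep retrying can be reproduced by hand from inside the guest. This is a minimal sketch using the endpoint shown in the log; the apiserver serves a self-signed certificate, hence -k:
	
	    # Probe the endpoint the log is polling (it times out in the failing runs above).
	    curl -k --max-time 5 https://10.0.2.15:8443/healthz
	    # Check whether the apiserver container is up at all, mirroring the harness's own query.
	    sudo docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}} {{.Status}}'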
	
	
	==> Docker <==
	-- Journal begins at Wed 2024-08-07 17:58:57 UTC, ends at Wed 2024-08-07 18:08:05 UTC. --
	Aug 07 18:07:51 running-upgrade-210000 dockerd[3187]: time="2024-08-07T18:07:51.211544110Z" level=warning msg="cleanup warnings time=\"2024-08-07T18:07:51Z\" level=info msg=\"starting signal loop\" namespace=moby pid=19198 runtime=io.containerd.runc.v2\n"
	Aug 07 18:07:51 running-upgrade-210000 dockerd[3187]: time="2024-08-07T18:07:51.272392632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 18:07:51 running-upgrade-210000 dockerd[3187]: time="2024-08-07T18:07:51.272434130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 18:07:51 running-upgrade-210000 dockerd[3187]: time="2024-08-07T18:07:51.272450170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 18:07:51 running-upgrade-210000 dockerd[3187]: time="2024-08-07T18:07:51.272514125Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/a749d29daef6b6473c69ff3f269fbf09e1f1cb72b0c022697ea44b6032ca61e8 pid=19219 runtime=io.containerd.runc.v2
	Aug 07 18:07:51 running-upgrade-210000 cri-dockerd[3026]: time="2024-08-07T18:07:51Z" level=error msg="ContainerStats resp: {0x40008014c0 linux}"
	Aug 07 18:07:52 running-upgrade-210000 cri-dockerd[3026]: time="2024-08-07T18:07:52Z" level=error msg="ContainerStats resp: {0x400074e740 linux}"
	Aug 07 18:07:52 running-upgrade-210000 cri-dockerd[3026]: time="2024-08-07T18:07:52Z" level=error msg="ContainerStats resp: {0x400035dc00 linux}"
	Aug 07 18:07:52 running-upgrade-210000 cri-dockerd[3026]: time="2024-08-07T18:07:52Z" level=error msg="ContainerStats resp: {0x400074ed80 linux}"
	Aug 07 18:07:52 running-upgrade-210000 cri-dockerd[3026]: time="2024-08-07T18:07:52Z" level=error msg="ContainerStats resp: {0x400074eec0 linux}"
	Aug 07 18:07:52 running-upgrade-210000 cri-dockerd[3026]: time="2024-08-07T18:07:52Z" level=error msg="ContainerStats resp: {0x40007fe800 linux}"
	Aug 07 18:07:52 running-upgrade-210000 cri-dockerd[3026]: time="2024-08-07T18:07:52Z" level=error msg="ContainerStats resp: {0x400074f700 linux}"
	Aug 07 18:07:52 running-upgrade-210000 cri-dockerd[3026]: time="2024-08-07T18:07:52Z" level=error msg="ContainerStats resp: {0x40007fe040 linux}"
	Aug 07 18:07:55 running-upgrade-210000 cri-dockerd[3026]: time="2024-08-07T18:07:55Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 07 18:08:00 running-upgrade-210000 cri-dockerd[3026]: time="2024-08-07T18:08:00Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 07 18:08:02 running-upgrade-210000 cri-dockerd[3026]: time="2024-08-07T18:08:02Z" level=error msg="ContainerStats resp: {0x400089be80 linux}"
	Aug 07 18:08:02 running-upgrade-210000 cri-dockerd[3026]: time="2024-08-07T18:08:02Z" level=error msg="ContainerStats resp: {0x4000800300 linux}"
	Aug 07 18:08:03 running-upgrade-210000 cri-dockerd[3026]: time="2024-08-07T18:08:03Z" level=error msg="ContainerStats resp: {0x4000801440 linux}"
	Aug 07 18:08:04 running-upgrade-210000 cri-dockerd[3026]: time="2024-08-07T18:08:04Z" level=error msg="ContainerStats resp: {0x40004c6dc0 linux}"
	Aug 07 18:08:04 running-upgrade-210000 cri-dockerd[3026]: time="2024-08-07T18:08:04Z" level=error msg="ContainerStats resp: {0x40004c7400 linux}"
	Aug 07 18:08:04 running-upgrade-210000 cri-dockerd[3026]: time="2024-08-07T18:08:04Z" level=error msg="ContainerStats resp: {0x4000800ec0 linux}"
	Aug 07 18:08:04 running-upgrade-210000 cri-dockerd[3026]: time="2024-08-07T18:08:04Z" level=error msg="ContainerStats resp: {0x40004c7940 linux}"
	Aug 07 18:08:04 running-upgrade-210000 cri-dockerd[3026]: time="2024-08-07T18:08:04Z" level=error msg="ContainerStats resp: {0x400035c100 linux}"
	Aug 07 18:08:04 running-upgrade-210000 cri-dockerd[3026]: time="2024-08-07T18:08:04Z" level=error msg="ContainerStats resp: {0x40000b69c0 linux}"
	Aug 07 18:08:04 running-upgrade-210000 cri-dockerd[3026]: time="2024-08-07T18:08:04Z" level=error msg="ContainerStats resp: {0x400035cbc0 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	a749d29daef6b       edaa71f2aee88       14 seconds ago      Running             coredns                   2                   86ae9ba67c355
	51d9bb212e497       edaa71f2aee88       24 seconds ago      Running             coredns                   2                   080ab994adf20
	4ea04b21f860a       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   86ae9ba67c355
	0e834a4d33b25       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   080ab994adf20
	efde69282e358       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   25711cd4030ae
	9f46b798aefe8       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   0704c7d32fbe8
	fc8a20f652010       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   266f85e91aaab
	8bad33df3502e       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   55dc1a5a43377
	7a8696ff05a6b       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   3e525ac1c7866
	d228b4cb54d34       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   6480c712628ce
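	
	The table above is the output of the "container status" probe run repeatedly earlier in the log; it can be regenerated on the node with the same fallback one-liner the harness uses:
	
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a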
	
	
	==> coredns [0e834a4d33b2] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 697507194634902197.1111710799573042913. HINFO: read udp 10.244.0.2:48234->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 697507194634902197.1111710799573042913. HINFO: read udp 10.244.0.2:47128->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 697507194634902197.1111710799573042913. HINFO: read udp 10.244.0.2:48036->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 697507194634902197.1111710799573042913. HINFO: read udp 10.244.0.2:54259->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 697507194634902197.1111710799573042913. HINFO: read udp 10.244.0.2:46100->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 697507194634902197.1111710799573042913. HINFO: read udp 10.244.0.2:33851->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 697507194634902197.1111710799573042913. HINFO: read udp 10.244.0.2:41702->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 697507194634902197.1111710799573042913. HINFO: read udp 10.244.0.2:48982->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 697507194634902197.1111710799573042913. HINFO: read udp 10.244.0.2:33310->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 697507194634902197.1111710799573042913. HINFO: read udp 10.244.0.2:43180->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4ea04b21f860] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5072045996079248570.288224390817088893. HINFO: read udp 10.244.0.3:38366->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5072045996079248570.288224390817088893. HINFO: read udp 10.244.0.3:55111->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5072045996079248570.288224390817088893. HINFO: read udp 10.244.0.3:47184->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5072045996079248570.288224390817088893. HINFO: read udp 10.244.0.3:40866->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5072045996079248570.288224390817088893. HINFO: read udp 10.244.0.3:42232->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5072045996079248570.288224390817088893. HINFO: read udp 10.244.0.3:55157->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5072045996079248570.288224390817088893. HINFO: read udp 10.244.0.3:40321->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5072045996079248570.288224390817088893. HINFO: read udp 10.244.0.3:58758->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5072045996079248570.288224390817088893. HINFO: read udp 10.244.0.3:54566->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [51d9bb212e49] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5006459762704901753.6782663585449227295. HINFO: read udp 10.244.0.2:45345->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5006459762704901753.6782663585449227295. HINFO: read udp 10.244.0.2:50337->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5006459762704901753.6782663585449227295. HINFO: read udp 10.244.0.2:58418->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5006459762704901753.6782663585449227295. HINFO: read udp 10.244.0.2:47870->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5006459762704901753.6782663585449227295. HINFO: read udp 10.244.0.2:58202->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5006459762704901753.6782663585449227295. HINFO: read udp 10.244.0.2:33456->10.0.2.3:53: i/o timeout
	
	
	==> coredns [a749d29daef6] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8742951592776022116.2199553886135616557. HINFO: read udp 10.244.0.3:32906->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8742951592776022116.2199553886135616557. HINFO: read udp 10.244.0.3:55910->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8742951592776022116.2199553886135616557. HINFO: read udp 10.244.0.3:59200->10.0.2.3:53: i/o timeout
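	
	Every CoreDNS instance above reports the same symptom: HINFO lookups to 10.0.2.3:53 time out. 10.0.2.3 is the default DNS forwarder in QEMU user-mode networking, so upstream DNS from the guest appears unreachable. A quick check from inside the node, assuming nslookup is available in the guest image:
	
	    # Query the QEMU slirp DNS forwarder directly; a timeout here matches the CoreDNS errors.
	    nslookup kubernetes.io 10.0.2.3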
	
	
	==> describe nodes <==
	Name:               running-upgrade-210000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-210000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=running-upgrade-210000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_07T11_03_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:03:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-210000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 18:08:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 18:03:48 +0000   Wed, 07 Aug 2024 18:03:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 18:03:48 +0000   Wed, 07 Aug 2024 18:03:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 18:03:48 +0000   Wed, 07 Aug 2024 18:03:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 18:03:48 +0000   Wed, 07 Aug 2024 18:03:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-210000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 553491d6d3d045c2a2e5d5d66715885e
	  System UUID:                553491d6d3d045c2a2e5d5d66715885e
	  Boot ID:                    64be194d-d94f-43fe-8c0c-8b1bca5ebe89
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-9hbvm                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-hk2bj                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-210000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-210000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-210000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-psfs7                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-210000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m2s   kube-proxy       
	  Normal  NodeReady                4m18s  kubelet          Node running-upgrade-210000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m18s  kubelet          Node running-upgrade-210000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s  kubelet          Node running-upgrade-210000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s  kubelet          Node running-upgrade-210000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m18s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s   node-controller  Node running-upgrade-210000 event: Registered Node running-upgrade-210000 in Controller
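	
	The node description above was collected with the harness's own kubectl invocation (visible earlier in the log) and can be rerun on the node verbatim:
	
	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig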
	
	
	==> dmesg <==
	[  +1.804682] systemd-fstab-generator[879]: Ignoring "noauto" for root device
	[  +0.057566] systemd-fstab-generator[890]: Ignoring "noauto" for root device
	[  +0.055886] systemd-fstab-generator[901]: Ignoring "noauto" for root device
	[  +1.150070] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.067248] systemd-fstab-generator[1051]: Ignoring "noauto" for root device
	[  +0.082962] systemd-fstab-generator[1062]: Ignoring "noauto" for root device
	[  +2.793132] systemd-fstab-generator[1290]: Ignoring "noauto" for root device
	[  +9.641879] systemd-fstab-generator[1952]: Ignoring "noauto" for root device
	[  +2.737350] systemd-fstab-generator[2226]: Ignoring "noauto" for root device
	[  +0.132788] systemd-fstab-generator[2261]: Ignoring "noauto" for root device
	[  +0.088816] systemd-fstab-generator[2275]: Ignoring "noauto" for root device
	[  +0.079966] systemd-fstab-generator[2290]: Ignoring "noauto" for root device
	[  +2.450251] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.169872] systemd-fstab-generator[2983]: Ignoring "noauto" for root device
	[  +0.081549] systemd-fstab-generator[2994]: Ignoring "noauto" for root device
	[  +0.068810] systemd-fstab-generator[3005]: Ignoring "noauto" for root device
	[  +0.097943] systemd-fstab-generator[3019]: Ignoring "noauto" for root device
	[  +2.286764] systemd-fstab-generator[3174]: Ignoring "noauto" for root device
	[  +3.328192] systemd-fstab-generator[3565]: Ignoring "noauto" for root device
	[  +1.599786] systemd-fstab-generator[4003]: Ignoring "noauto" for root device
	[ +17.279431] kauditd_printk_skb: 68 callbacks suppressed
	[Aug 7 18:03] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.367958] systemd-fstab-generator[12233]: Ignoring "noauto" for root device
	[  +5.636579] systemd-fstab-generator[12842]: Ignoring "noauto" for root device
	[  +0.460146] systemd-fstab-generator[12975]: Ignoring "noauto" for root device
	
	
	==> etcd [8bad33df3502] <==
	{"level":"info","ts":"2024-08-07T18:03:44.068Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-07T18:03:44.068Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-07T18:03:44.067Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-07T18:03:44.068Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-07T18:03:44.068Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-08-07T18:03:44.069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-07T18:03:44.069Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-07T18:03:45.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-07T18:03:45.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-07T18:03:45.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-07T18:03:45.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-07T18:03:45.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-07T18:03:45.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-07T18:03:45.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-07T18:03:45.061Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-210000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-07T18:03:45.062Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T18:03:45.062Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T18:03:45.062Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T18:03:45.062Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-07T18:03:45.062Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T18:03:45.062Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T18:03:45.062Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T18:03:45.063Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-08-07T18:03:45.062Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-07T18:03:45.063Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:08:06 up 9 min,  0 users,  load average: 0.53, 0.31, 0.15
	Linux running-upgrade-210000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [d228b4cb54d3] <==
	I0807 18:03:46.297039       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0807 18:03:46.297044       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0807 18:03:46.297544       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0807 18:03:46.298585       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0807 18:03:46.299826       1 cache.go:39] Caches are synced for autoregister controller
	I0807 18:03:46.307443       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0807 18:03:46.316717       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0807 18:03:47.041265       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0807 18:03:47.208708       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0807 18:03:47.212838       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0807 18:03:47.212863       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0807 18:03:47.352420       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0807 18:03:47.364750       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0807 18:03:47.464574       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0807 18:03:47.466626       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0807 18:03:47.466974       1 controller.go:611] quota admission added evaluator for: endpoints
	I0807 18:03:47.468358       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0807 18:03:48.336426       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0807 18:03:48.607185       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0807 18:03:48.610325       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0807 18:03:48.641130       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0807 18:03:48.664979       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0807 18:04:02.948418       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0807 18:04:02.997788       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0807 18:04:03.773160       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [7a8696ff05a6] <==
	I0807 18:04:02.147348       1 shared_informer.go:262] Caches are synced for stateful set
	I0807 18:04:02.175178       1 shared_informer.go:262] Caches are synced for persistent volume
	I0807 18:04:02.178288       1 shared_informer.go:262] Caches are synced for expand
	I0807 18:04:02.196910       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0807 18:04:02.199058       1 shared_informer.go:262] Caches are synced for PV protection
	I0807 18:04:02.212555       1 shared_informer.go:262] Caches are synced for job
	I0807 18:04:02.214742       1 shared_informer.go:262] Caches are synced for deployment
	I0807 18:04:02.249164       1 shared_informer.go:262] Caches are synced for cronjob
	I0807 18:04:02.250872       1 shared_informer.go:262] Caches are synced for resource quota
	I0807 18:04:02.298104       1 shared_informer.go:262] Caches are synced for disruption
	I0807 18:04:02.298110       1 disruption.go:371] Sending events to api server.
	I0807 18:04:02.303145       1 shared_informer.go:262] Caches are synced for resource quota
	I0807 18:04:02.346448       1 shared_informer.go:262] Caches are synced for taint
	I0807 18:04:02.346561       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0807 18:04:02.346700       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0807 18:04:02.347209       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-210000. Assuming now as a timestamp.
	I0807 18:04:02.347249       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0807 18:04:02.346803       1 event.go:294] "Event occurred" object="running-upgrade-210000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-210000 event: Registered Node running-upgrade-210000 in Controller"
	I0807 18:04:02.713510       1 shared_informer.go:262] Caches are synced for garbage collector
	I0807 18:04:02.800269       1 shared_informer.go:262] Caches are synced for garbage collector
	I0807 18:04:02.800279       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0807 18:04:02.951273       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-psfs7"
	I0807 18:04:02.998871       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0807 18:04:03.103194       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-hk2bj"
	I0807 18:04:03.107804       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-9hbvm"
	
	
	==> kube-proxy [efde69282e35] <==
	I0807 18:04:03.752045       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0807 18:04:03.752085       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0807 18:04:03.752113       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0807 18:04:03.770829       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0807 18:04:03.770838       1 server_others.go:206] "Using iptables Proxier"
	I0807 18:04:03.770853       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0807 18:04:03.770959       1 server.go:661] "Version info" version="v1.24.1"
	I0807 18:04:03.770963       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 18:04:03.771472       1 config.go:317] "Starting service config controller"
	I0807 18:04:03.771481       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0807 18:04:03.771570       1 config.go:226] "Starting endpoint slice config controller"
	I0807 18:04:03.771573       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0807 18:04:03.772017       1 config.go:444] "Starting node config controller"
	I0807 18:04:03.772020       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0807 18:04:03.872530       1 shared_informer.go:262] Caches are synced for node config
	I0807 18:04:03.872547       1 shared_informer.go:262] Caches are synced for service config
	I0807 18:04:03.872559       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [fc8a20f65201] <==
	W0807 18:03:46.257868       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0807 18:03:46.257879       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0807 18:03:46.257895       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0807 18:03:46.257899       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0807 18:03:46.257973       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0807 18:03:46.257989       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0807 18:03:46.258589       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0807 18:03:46.258600       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0807 18:03:46.258614       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0807 18:03:46.258639       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0807 18:03:46.258658       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0807 18:03:46.258663       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0807 18:03:47.105686       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0807 18:03:47.105791       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0807 18:03:47.162228       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0807 18:03:47.162278       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0807 18:03:47.173035       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0807 18:03:47.173118       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0807 18:03:47.228173       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0807 18:03:47.228195       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0807 18:03:47.270542       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0807 18:03:47.270582       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0807 18:03:47.307852       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0807 18:03:47.307948       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0807 18:03:47.554300       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-08-07 17:58:57 UTC, ends at Wed 2024-08-07 18:08:06 UTC. --
	Aug 07 18:03:50 running-upgrade-210000 kubelet[12848]: E0807 18:03:50.645152   12848 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-210000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-210000"
	Aug 07 18:03:50 running-upgrade-210000 kubelet[12848]: I0807 18:03:50.837432   12848 request.go:601] Waited for 1.125674294s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Aug 07 18:03:50 running-upgrade-210000 kubelet[12848]: E0807 18:03:50.842693   12848 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-210000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-210000"
	Aug 07 18:04:02 running-upgrade-210000 kubelet[12848]: I0807 18:04:02.086774   12848 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 07 18:04:02 running-upgrade-210000 kubelet[12848]: I0807 18:04:02.087068   12848 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 07 18:04:02 running-upgrade-210000 kubelet[12848]: I0807 18:04:02.351866   12848 topology_manager.go:200] "Topology Admit Handler"
	Aug 07 18:04:02 running-upgrade-210000 kubelet[12848]: I0807 18:04:02.492607   12848 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn5r8\" (UniqueName: \"kubernetes.io/projected/d4d4b7cf-8ecf-4055-9133-9d2837565e11-kube-api-access-gn5r8\") pod \"storage-provisioner\" (UID: \"d4d4b7cf-8ecf-4055-9133-9d2837565e11\") " pod="kube-system/storage-provisioner"
	Aug 07 18:04:02 running-upgrade-210000 kubelet[12848]: I0807 18:04:02.492666   12848 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d4d4b7cf-8ecf-4055-9133-9d2837565e11-tmp\") pod \"storage-provisioner\" (UID: \"d4d4b7cf-8ecf-4055-9133-9d2837565e11\") " pod="kube-system/storage-provisioner"
	Aug 07 18:04:02 running-upgrade-210000 kubelet[12848]: E0807 18:04:02.596082   12848 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 07 18:04:02 running-upgrade-210000 kubelet[12848]: E0807 18:04:02.596154   12848 projected.go:192] Error preparing data for projected volume kube-api-access-gn5r8 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 07 18:04:02 running-upgrade-210000 kubelet[12848]: E0807 18:04:02.596201   12848 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/d4d4b7cf-8ecf-4055-9133-9d2837565e11-kube-api-access-gn5r8 podName:d4d4b7cf-8ecf-4055-9133-9d2837565e11 nodeName:}" failed. No retries permitted until 2024-08-07 18:04:03.096188484 +0000 UTC m=+14.499383021 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gn5r8" (UniqueName: "kubernetes.io/projected/d4d4b7cf-8ecf-4055-9133-9d2837565e11-kube-api-access-gn5r8") pod "storage-provisioner" (UID: "d4d4b7cf-8ecf-4055-9133-9d2837565e11") : configmap "kube-root-ca.crt" not found
	Aug 07 18:04:02 running-upgrade-210000 kubelet[12848]: I0807 18:04:02.954271   12848 topology_manager.go:200] "Topology Admit Handler"
	Aug 07 18:04:03 running-upgrade-210000 kubelet[12848]: I0807 18:04:03.097020   12848 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fa7a9b9-7873-4166-a9fd-35bf27b8d6f7-lib-modules\") pod \"kube-proxy-psfs7\" (UID: \"6fa7a9b9-7873-4166-a9fd-35bf27b8d6f7\") " pod="kube-system/kube-proxy-psfs7"
	Aug 07 18:04:03 running-upgrade-210000 kubelet[12848]: I0807 18:04:03.097271   12848 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9f75\" (UniqueName: \"kubernetes.io/projected/6fa7a9b9-7873-4166-a9fd-35bf27b8d6f7-kube-api-access-r9f75\") pod \"kube-proxy-psfs7\" (UID: \"6fa7a9b9-7873-4166-a9fd-35bf27b8d6f7\") " pod="kube-system/kube-proxy-psfs7"
	Aug 07 18:04:03 running-upgrade-210000 kubelet[12848]: I0807 18:04:03.097301   12848 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fa7a9b9-7873-4166-a9fd-35bf27b8d6f7-xtables-lock\") pod \"kube-proxy-psfs7\" (UID: \"6fa7a9b9-7873-4166-a9fd-35bf27b8d6f7\") " pod="kube-system/kube-proxy-psfs7"
	Aug 07 18:04:03 running-upgrade-210000 kubelet[12848]: I0807 18:04:03.097327   12848 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6fa7a9b9-7873-4166-a9fd-35bf27b8d6f7-kube-proxy\") pod \"kube-proxy-psfs7\" (UID: \"6fa7a9b9-7873-4166-a9fd-35bf27b8d6f7\") " pod="kube-system/kube-proxy-psfs7"
	Aug 07 18:04:03 running-upgrade-210000 kubelet[12848]: I0807 18:04:03.105774   12848 topology_manager.go:200] "Topology Admit Handler"
	Aug 07 18:04:03 running-upgrade-210000 kubelet[12848]: I0807 18:04:03.108184   12848 topology_manager.go:200] "Topology Admit Handler"
	Aug 07 18:04:03 running-upgrade-210000 kubelet[12848]: I0807 18:04:03.299097   12848 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b0e3a4c-a06c-45db-a079-ee2b5eaf4014-config-volume\") pod \"coredns-6d4b75cb6d-9hbvm\" (UID: \"3b0e3a4c-a06c-45db-a079-ee2b5eaf4014\") " pod="kube-system/coredns-6d4b75cb6d-9hbvm"
	Aug 07 18:04:03 running-upgrade-210000 kubelet[12848]: I0807 18:04:03.299124   12848 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2b33bd0-b9b3-4b01-938a-09eeabb96993-config-volume\") pod \"coredns-6d4b75cb6d-hk2bj\" (UID: \"a2b33bd0-b9b3-4b01-938a-09eeabb96993\") " pod="kube-system/coredns-6d4b75cb6d-hk2bj"
	Aug 07 18:04:03 running-upgrade-210000 kubelet[12848]: I0807 18:04:03.299135   12848 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hzgw\" (UniqueName: \"kubernetes.io/projected/a2b33bd0-b9b3-4b01-938a-09eeabb96993-kube-api-access-8hzgw\") pod \"coredns-6d4b75cb6d-hk2bj\" (UID: \"a2b33bd0-b9b3-4b01-938a-09eeabb96993\") " pod="kube-system/coredns-6d4b75cb6d-hk2bj"
	Aug 07 18:04:03 running-upgrade-210000 kubelet[12848]: I0807 18:04:03.299151   12848 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvnmm\" (UniqueName: \"kubernetes.io/projected/3b0e3a4c-a06c-45db-a079-ee2b5eaf4014-kube-api-access-mvnmm\") pod \"coredns-6d4b75cb6d-9hbvm\" (UID: \"3b0e3a4c-a06c-45db-a079-ee2b5eaf4014\") " pod="kube-system/coredns-6d4b75cb6d-9hbvm"
	Aug 07 18:04:03 running-upgrade-210000 kubelet[12848]: I0807 18:04:03.856101   12848 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="080ab994adf20418d9041c1f5a5d814ace42562340e2c4e1d35b2fef960a771e"
	Aug 07 18:07:41 running-upgrade-210000 kubelet[12848]: I0807 18:07:41.343688   12848 scope.go:110] "RemoveContainer" containerID="1ccd3a59766fbc3c21ff2c433c1d6298b7c21e5ce79dbe06602ba4b89b2b34a1"
	Aug 07 18:07:51 running-upgrade-210000 kubelet[12848]: I0807 18:07:51.390347   12848 scope.go:110] "RemoveContainer" containerID="6a43bd0833867dcbba06e965c42047063fe3ee31e8d35060e00614cd70c6a497"
	
	
	==> storage-provisioner [9f46b798aefe] <==
	I0807 18:04:03.438343       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0807 18:04:03.444133       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0807 18:04:03.444149       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0807 18:04:03.447297       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0807 18:04:03.447426       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2e8f6c14-c550-4fb2-b890-c9cc26794d51", APIVersion:"v1", ResourceVersion:"359", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-210000_b4011677-7629-4eb8-b690-b16f5bcf5bef became leader
	I0807 18:04:03.447452       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-210000_b4011677-7629-4eb8-b690-b16f5bcf5bef!
	I0807 18:04:03.548454       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-210000_b4011677-7629-4eb8-b690-b16f5bcf5bef!
	

-- /stdout --
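The scheduler's "forbidden" warnings in the log above are typical control-plane bootstrap noise: kube-scheduler starts before the default RBAC policy has finished reconciling, and the errors stop once the informer caches sync. The sketch below is a minimal client-go program (illustrative only, not part of this report) that asks the API server the same list question for poddisruptionbudgets; it uses SelfSubjectAccessReview, so it checks the caller's kubeconfig identity rather than system:kube-scheduler.

	package main

	import (
		"context"
		"fmt"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the default kubeconfig and ask: may the current
		// identity list poddisruptionbudgets (group "policy") cluster-wide?
		// This mirrors the verb/group/resource the scheduler was briefly denied.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		review := &authv1.SelfSubjectAccessReview{
			Spec: authv1.SelfSubjectAccessReviewSpec{
				ResourceAttributes: &authv1.ResourceAttributes{
					Verb: "list", Group: "policy", Resource: "poddisruptionbudgets",
				},
			},
		}
		res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().
			Create(context.TODO(), review, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("allowed:", res.Status.Allowed)
	}

Against a healthy cluster with an admin context this prints allowed: true.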
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-210000 -n running-upgrade-210000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-210000 -n running-upgrade-210000: exit status 2 (15.625804375s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-210000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-210000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-210000
--- FAIL: TestRunningBinaryUpgrade (593.71s)
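The failure recorded above turns on the status probe: exit status 2 with state "Stopped" is what makes the harness skip the kubectl post-mortem. A minimal Go sketch of that probe, assuming the out/minikube-darwin-arm64 binary and the profile name shown in the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Re-run the harness's post-mortem probe by hand. The harness treats a
		// non-zero exit as advisory ("status error: exit status 2 (may be ok)")
		// and keys off the printed state instead.
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.APIServer}}", "-p", "running-upgrade-210000")
		out, err := cmd.Output()
		fmt.Printf("apiserver state: %s", out)
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Printf("exit status %d (may be ok)\n", exitErr.ExitCode())
		}
	}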

TestKubernetesUpgrade (18.86s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-465000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-465000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.992349541s)

-- stdout --
	* [kubernetes-upgrade-465000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-465000" primary control-plane node in "kubernetes-upgrade-465000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-465000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 11:01:28.494517    9555 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:01:28.494650    9555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:01:28.494653    9555 out.go:304] Setting ErrFile to fd 2...
	I0807 11:01:28.494656    9555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:01:28.494789    9555 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:01:28.495894    9555 out.go:298] Setting JSON to false
	I0807 11:01:28.512221    9555 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5457,"bootTime":1723048231,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 11:01:28.512300    9555 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 11:01:28.518745    9555 out.go:177] * [kubernetes-upgrade-465000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 11:01:28.524628    9555 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 11:01:28.524744    9555 notify.go:220] Checking for updates...
	I0807 11:01:28.531528    9555 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:01:28.534564    9555 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 11:01:28.537550    9555 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 11:01:28.538942    9555 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 11:01:28.541555    9555 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 11:01:28.544905    9555 config.go:182] Loaded profile config "multinode-190000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 11:01:28.544970    9555 config.go:182] Loaded profile config "running-upgrade-210000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0807 11:01:28.545026    9555 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 11:01:28.549422    9555 out.go:177] * Using the qemu2 driver based on user configuration
	I0807 11:01:28.556559    9555 start.go:297] selected driver: qemu2
	I0807 11:01:28.556565    9555 start.go:901] validating driver "qemu2" against <nil>
	I0807 11:01:28.556570    9555 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 11:01:28.558652    9555 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 11:01:28.561553    9555 out.go:177] * Automatically selected the socket_vmnet network
	I0807 11:01:28.564631    9555 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0807 11:01:28.564651    9555 cni.go:84] Creating CNI manager for ""
	I0807 11:01:28.564658    9555 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0807 11:01:28.564726    9555 start.go:340] cluster config:
	{Name:kubernetes-upgrade-465000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-465000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:01:28.568054    9555 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:01:28.574717    9555 out.go:177] * Starting "kubernetes-upgrade-465000" primary control-plane node in "kubernetes-upgrade-465000" cluster
	I0807 11:01:28.578569    9555 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0807 11:01:28.578585    9555 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0807 11:01:28.578598    9555 cache.go:56] Caching tarball of preloaded images
	I0807 11:01:28.578662    9555 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 11:01:28.578667    9555 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0807 11:01:28.578725    9555 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/kubernetes-upgrade-465000/config.json ...
	I0807 11:01:28.578736    9555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/kubernetes-upgrade-465000/config.json: {Name:mk460fcc8d95473f7d6dcbc11b5f5ab9326ff69c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:01:28.579068    9555 start.go:360] acquireMachinesLock for kubernetes-upgrade-465000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:01:28.579099    9555 start.go:364] duration metric: took 24.583µs to acquireMachinesLock for "kubernetes-upgrade-465000"
	I0807 11:01:28.579108    9555 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-465000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-465000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:01:28.579133    9555 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:01:28.587549    9555 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 11:01:28.603066    9555 start.go:159] libmachine.API.Create for "kubernetes-upgrade-465000" (driver="qemu2")
	I0807 11:01:28.603089    9555 client.go:168] LocalClient.Create starting
	I0807 11:01:28.603143    9555 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:01:28.603175    9555 main.go:141] libmachine: Decoding PEM data...
	I0807 11:01:28.603184    9555 main.go:141] libmachine: Parsing certificate...
	I0807 11:01:28.603221    9555 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:01:28.603247    9555 main.go:141] libmachine: Decoding PEM data...
	I0807 11:01:28.603261    9555 main.go:141] libmachine: Parsing certificate...
	I0807 11:01:28.603726    9555 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:01:28.769833    9555 main.go:141] libmachine: Creating SSH key...
	I0807 11:01:28.964676    9555 main.go:141] libmachine: Creating Disk image...
	I0807 11:01:28.964684    9555 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:01:28.964945    9555 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/disk.qcow2
	I0807 11:01:28.974695    9555 main.go:141] libmachine: STDOUT: 
	I0807 11:01:28.974719    9555 main.go:141] libmachine: STDERR: 
	I0807 11:01:28.974794    9555 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/disk.qcow2 +20000M
	I0807 11:01:28.982782    9555 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:01:28.982798    9555 main.go:141] libmachine: STDERR: 
	I0807 11:01:28.982812    9555 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/disk.qcow2
	I0807 11:01:28.982823    9555 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:01:28.982834    9555 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:01:28.982862    9555 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:d6:b4:9a:ff:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/disk.qcow2
	I0807 11:01:28.984404    9555 main.go:141] libmachine: STDOUT: 
	I0807 11:01:28.984420    9555 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:01:28.984438    9555 client.go:171] duration metric: took 381.348125ms to LocalClient.Create
	I0807 11:01:30.986648    9555 start.go:128] duration metric: took 2.407498583s to createHost
	I0807 11:01:30.986726    9555 start.go:83] releasing machines lock for "kubernetes-upgrade-465000", held for 2.407635167s
	W0807 11:01:30.986870    9555 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:01:31.002222    9555 out.go:177] * Deleting "kubernetes-upgrade-465000" in qemu2 ...
	W0807 11:01:31.029141    9555 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:01:31.029192    9555 start.go:729] Will try again in 5 seconds ...
	I0807 11:01:36.031295    9555 start.go:360] acquireMachinesLock for kubernetes-upgrade-465000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:01:36.031613    9555 start.go:364] duration metric: took 264.792µs to acquireMachinesLock for "kubernetes-upgrade-465000"
	I0807 11:01:36.031704    9555 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-465000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-465000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:01:36.031839    9555 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:01:36.037349    9555 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 11:01:36.071408    9555 start.go:159] libmachine.API.Create for "kubernetes-upgrade-465000" (driver="qemu2")
	I0807 11:01:36.071451    9555 client.go:168] LocalClient.Create starting
	I0807 11:01:36.071587    9555 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:01:36.071649    9555 main.go:141] libmachine: Decoding PEM data...
	I0807 11:01:36.071667    9555 main.go:141] libmachine: Parsing certificate...
	I0807 11:01:36.071727    9555 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:01:36.071767    9555 main.go:141] libmachine: Decoding PEM data...
	I0807 11:01:36.071780    9555 main.go:141] libmachine: Parsing certificate...
	I0807 11:01:36.072399    9555 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:01:36.237282    9555 main.go:141] libmachine: Creating SSH key...
	I0807 11:01:36.397575    9555 main.go:141] libmachine: Creating Disk image...
	I0807 11:01:36.397584    9555 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:01:36.397843    9555 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/disk.qcow2
	I0807 11:01:36.407642    9555 main.go:141] libmachine: STDOUT: 
	I0807 11:01:36.407662    9555 main.go:141] libmachine: STDERR: 
	I0807 11:01:36.407724    9555 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/disk.qcow2 +20000M
	I0807 11:01:36.415705    9555 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:01:36.415719    9555 main.go:141] libmachine: STDERR: 
	I0807 11:01:36.415734    9555 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/disk.qcow2
	I0807 11:01:36.415748    9555 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:01:36.415760    9555 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:01:36.415796    9555 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:98:f6:27:a7:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/disk.qcow2
	I0807 11:01:36.417456    9555 main.go:141] libmachine: STDOUT: 
	I0807 11:01:36.417470    9555 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:01:36.417482    9555 client.go:171] duration metric: took 346.028292ms to LocalClient.Create
	I0807 11:01:38.419697    9555 start.go:128] duration metric: took 2.387843083s to createHost
	I0807 11:01:38.419794    9555 start.go:83] releasing machines lock for "kubernetes-upgrade-465000", held for 2.388137625s
	W0807 11:01:38.420230    9555 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-465000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-465000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:01:38.429977    9555 out.go:177] 
	W0807 11:01:38.437017    9555 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:01:38.437039    9555 out.go:239] * 
	* 
	W0807 11:01:38.439521    9555 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:01:38.449985    9555 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-465000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-465000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-465000: (3.475626708s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-465000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-465000 status --format={{.Host}}: exit status 7 (32.6135ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-465000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-465000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.178172291s)

-- stdout --
	* [kubernetes-upgrade-465000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-465000" primary control-plane node in "kubernetes-upgrade-465000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-465000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-465000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 11:01:41.996744    9589 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:01:41.996878    9589 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:01:41.996882    9589 out.go:304] Setting ErrFile to fd 2...
	I0807 11:01:41.996884    9589 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:01:41.997015    9589 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:01:41.998051    9589 out.go:298] Setting JSON to false
	I0807 11:01:42.015193    9589 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5471,"bootTime":1723048231,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 11:01:42.015279    9589 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 11:01:42.018887    9589 out.go:177] * [kubernetes-upgrade-465000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 11:01:42.025870    9589 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 11:01:42.025912    9589 notify.go:220] Checking for updates...
	I0807 11:01:42.032827    9589 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:01:42.035802    9589 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 11:01:42.038836    9589 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 11:01:42.041795    9589 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 11:01:42.044779    9589 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 11:01:42.048024    9589 config.go:182] Loaded profile config "kubernetes-upgrade-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0807 11:01:42.048269    9589 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 11:01:42.050779    9589 out.go:177] * Using the qemu2 driver based on existing profile
	I0807 11:01:42.057805    9589 start.go:297] selected driver: qemu2
	I0807 11:01:42.057809    9589 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-465000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-465000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:01:42.057861    9589 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 11:01:42.060077    9589 cni.go:84] Creating CNI manager for ""
	I0807 11:01:42.060095    9589 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 11:01:42.060122    9589 start.go:340] cluster config:
	{Name:kubernetes-upgrade-465000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-465000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:01:42.063287    9589 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:01:42.070806    9589 out.go:177] * Starting "kubernetes-upgrade-465000" primary control-plane node in "kubernetes-upgrade-465000" cluster
	I0807 11:01:42.074811    9589 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0807 11:01:42.074826    9589 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0807 11:01:42.074833    9589 cache.go:56] Caching tarball of preloaded images
	I0807 11:01:42.074886    9589 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 11:01:42.074891    9589 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0807 11:01:42.074943    9589 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/kubernetes-upgrade-465000/config.json ...
	I0807 11:01:42.075292    9589 start.go:360] acquireMachinesLock for kubernetes-upgrade-465000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:01:42.075319    9589 start.go:364] duration metric: took 21.375µs to acquireMachinesLock for "kubernetes-upgrade-465000"
	I0807 11:01:42.075328    9589 start.go:96] Skipping create...Using existing machine configuration
	I0807 11:01:42.075335    9589 fix.go:54] fixHost starting: 
	I0807 11:01:42.075448    9589 fix.go:112] recreateIfNeeded on kubernetes-upgrade-465000: state=Stopped err=<nil>
	W0807 11:01:42.075454    9589 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 11:01:42.083840    9589 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-465000" ...
	I0807 11:01:42.087781    9589 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:01:42.087812    9589 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:98:f6:27:a7:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/disk.qcow2
	I0807 11:01:42.089812    9589 main.go:141] libmachine: STDOUT: 
	I0807 11:01:42.089829    9589 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:01:42.089856    9589 fix.go:56] duration metric: took 14.522042ms for fixHost
	I0807 11:01:42.089860    9589 start.go:83] releasing machines lock for "kubernetes-upgrade-465000", held for 14.535959ms
	W0807 11:01:42.089866    9589 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:01:42.089890    9589 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:01:42.089894    9589 start.go:729] Will try again in 5 seconds ...
	I0807 11:01:47.092068    9589 start.go:360] acquireMachinesLock for kubernetes-upgrade-465000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:01:47.092602    9589 start.go:364] duration metric: took 404.042µs to acquireMachinesLock for "kubernetes-upgrade-465000"
	I0807 11:01:47.092701    9589 start.go:96] Skipping create...Using existing machine configuration
	I0807 11:01:47.092722    9589 fix.go:54] fixHost starting: 
	I0807 11:01:47.093456    9589 fix.go:112] recreateIfNeeded on kubernetes-upgrade-465000: state=Stopped err=<nil>
	W0807 11:01:47.093482    9589 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 11:01:47.095699    9589 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-465000" ...
	I0807 11:01:47.103311    9589 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:01:47.103540    9589 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:98:f6:27:a7:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubernetes-upgrade-465000/disk.qcow2
	I0807 11:01:47.113689    9589 main.go:141] libmachine: STDOUT: 
	I0807 11:01:47.113756    9589 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:01:47.113847    9589 fix.go:56] duration metric: took 21.130459ms for fixHost
	I0807 11:01:47.113863    9589 start.go:83] releasing machines lock for "kubernetes-upgrade-465000", held for 21.23825ms
	W0807 11:01:47.114067    9589 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-465000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-465000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:01:47.121237    9589 out.go:177] 
	W0807 11:01:47.124357    9589 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:01:47.124391    9589 out.go:239] * 
	* 
	W0807 11:01:47.127374    9589 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:01:47.134290    9589 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-465000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-465000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-465000 version --output=json: exit status 1 (64.562416ms)

** stderr ** 
	error: context "kubernetes-upgrade-465000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
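Triage note: the follow-up kubectl call fails for the same underlying reason as the start itself: the qemu2 VM never came up, so minikube never wrote a "kubernetes-upgrade-465000" context into the kubeconfig. Two standard kubectl commands (nothing minikube-specific assumed) confirm which contexts actually exist:

	kubectl config get-contexts        # list every context in the active kubeconfig
	kubectl config current-context     # show which context is currently selected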
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-07 11:01:47.214417 -0700 PDT m=+959.809136918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-465000 -n kubernetes-upgrade-465000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-465000 -n kubernetes-upgrade-465000: exit status 7 (33.091ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-465000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-465000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-465000
--- FAIL: TestKubernetesUpgrade (18.86s)
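Triage note: this failure is environmental rather than an upgrade bug. Every qemu2 start in this run dies dialing /var/run/socket_vmnet ("Connection refused"), i.e. the socket_vmnet daemon was not running on the host. A minimal manual check, assuming the Homebrew-managed socket_vmnet install at /opt/socket_vmnet that the qemu command line above points at (the service name below is that assumption, not something taken from this log):

	ls -l /var/run/socket_vmnet                  # the daemon's listening socket should exist
	sudo launchctl list | grep -i socket_vmnet   # check whether the daemon is loaded at all
	sudo brew services restart socket_vmnet      # restart the assumed Homebrew service, then re-run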

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.26s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19389
- KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1835498236/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.26s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.42s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19389
- KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4282138045/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.42s)
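Triage note: both TestHyperkitDriverSkipUpgrade subtests fail by construction on this agent. The hyperkit driver only exists for x86_64 macOS, and this is an Apple-silicon host, so minikube bails out with DRV_UNSUPPORTED_OS (exit status 56) before any upgrade logic runs. A quick sanity check using only standard tools plus the driver flag already used elsewhere in this report:

	uname -m                                         # prints arm64 on this agent
	out/minikube-darwin-arm64 start --driver=qemu2   # qemu2 is the supported driver these jobs use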

TestStoppedBinaryUpgrade/Upgrade (571.84s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2421533291 start -p stopped-upgrade-423000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2421533291 start -p stopped-upgrade-423000 --memory=2200 --vm-driver=qemu2 : (38.292845125s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2421533291 -p stopped-upgrade-423000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2421533291 -p stopped-upgrade-423000 stop: (12.104351417s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-423000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-423000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.348020125s)

-- stdout --
	* [stopped-upgrade-423000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-423000" primary control-plane node in "stopped-upgrade-423000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-423000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0807 11:02:38.767216    9637 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:02:38.767362    9637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:02:38.767375    9637 out.go:304] Setting ErrFile to fd 2...
	I0807 11:02:38.767378    9637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:02:38.767544    9637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:02:38.768838    9637 out.go:298] Setting JSON to false
	I0807 11:02:38.787763    9637 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5527,"bootTime":1723048231,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 11:02:38.787830    9637 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 11:02:38.792598    9637 out.go:177] * [stopped-upgrade-423000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 11:02:38.800503    9637 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 11:02:38.800555    9637 notify.go:220] Checking for updates...
	I0807 11:02:38.807555    9637 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:02:38.810574    9637 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 11:02:38.813582    9637 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 11:02:38.816610    9637 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 11:02:38.819612    9637 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 11:02:38.822826    9637 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0807 11:02:38.825534    9637 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0807 11:02:38.828590    9637 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 11:02:38.832530    9637 out.go:177] * Using the qemu2 driver based on existing profile
	I0807 11:02:38.839449    9637 start.go:297] selected driver: qemu2
	I0807 11:02:38.839455    9637 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-423000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51476 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-423000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0807 11:02:38.839507    9637 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 11:02:38.842322    9637 cni.go:84] Creating CNI manager for ""
	I0807 11:02:38.842340    9637 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 11:02:38.842376    9637 start.go:340] cluster config:
	{Name:stopped-upgrade-423000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51476 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-423000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0807 11:02:38.842439    9637 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:02:38.849540    9637 out.go:177] * Starting "stopped-upgrade-423000" primary control-plane node in "stopped-upgrade-423000" cluster
	I0807 11:02:38.853666    9637 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0807 11:02:38.853682    9637 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0807 11:02:38.853689    9637 cache.go:56] Caching tarball of preloaded images
	I0807 11:02:38.853748    9637 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 11:02:38.853753    9637 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0807 11:02:38.853807    9637 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/config.json ...
	I0807 11:02:38.854270    9637 start.go:360] acquireMachinesLock for stopped-upgrade-423000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:02:38.854307    9637 start.go:364] duration metric: took 30.333µs to acquireMachinesLock for "stopped-upgrade-423000"
	I0807 11:02:38.854315    9637 start.go:96] Skipping create...Using existing machine configuration
	I0807 11:02:38.854323    9637 fix.go:54] fixHost starting: 
	I0807 11:02:38.854435    9637 fix.go:112] recreateIfNeeded on stopped-upgrade-423000: state=Stopped err=<nil>
	W0807 11:02:38.854443    9637 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 11:02:38.858570    9637 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-423000" ...
	I0807 11:02:38.862507    9637 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:02:38.862571    9637 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/stopped-upgrade-423000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/stopped-upgrade-423000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/stopped-upgrade-423000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51441-:22,hostfwd=tcp::51442-:2376,hostname=stopped-upgrade-423000 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/stopped-upgrade-423000/disk.qcow2
	I0807 11:02:38.910174    9637 main.go:141] libmachine: STDOUT: 
	I0807 11:02:38.910202    9637 main.go:141] libmachine: STDERR: 
	I0807 11:02:38.910207    9637 main.go:141] libmachine: Waiting for VM to start (ssh -p 51441 docker@127.0.0.1)...
	I0807 11:02:59.205696    9637 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/config.json ...
	I0807 11:02:59.206526    9637 machine.go:94] provisionDockerMachine start ...
	I0807 11:02:59.206677    9637 main.go:141] libmachine: Using SSH client type: native
	I0807 11:02:59.207146    9637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bdea10] 0x100be1270 <nil>  [] 0s} localhost 51441 <nil> <nil>}
	I0807 11:02:59.207161    9637 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 11:02:59.298813    9637 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0807 11:02:59.298840    9637 buildroot.go:166] provisioning hostname "stopped-upgrade-423000"
	I0807 11:02:59.298949    9637 main.go:141] libmachine: Using SSH client type: native
	I0807 11:02:59.299182    9637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bdea10] 0x100be1270 <nil>  [] 0s} localhost 51441 <nil> <nil>}
	I0807 11:02:59.299194    9637 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-423000 && echo "stopped-upgrade-423000" | sudo tee /etc/hostname
	I0807 11:02:59.389865    9637 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-423000
	
	I0807 11:02:59.389954    9637 main.go:141] libmachine: Using SSH client type: native
	I0807 11:02:59.390139    9637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bdea10] 0x100be1270 <nil>  [] 0s} localhost 51441 <nil> <nil>}
	I0807 11:02:59.390153    9637 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-423000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-423000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-423000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 11:02:59.471298    9637 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 11:02:59.471318    9637 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19389-6671/.minikube CaCertPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19389-6671/.minikube}
	I0807 11:02:59.471327    9637 buildroot.go:174] setting up certificates
	I0807 11:02:59.471335    9637 provision.go:84] configureAuth start
	I0807 11:02:59.471345    9637 provision.go:143] copyHostCerts
	I0807 11:02:59.471432    9637 exec_runner.go:144] found /Users/jenkins/minikube-integration/19389-6671/.minikube/ca.pem, removing ...
	I0807 11:02:59.471441    9637 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19389-6671/.minikube/ca.pem
	I0807 11:02:59.471549    9637 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19389-6671/.minikube/ca.pem (1082 bytes)
	I0807 11:02:59.471743    9637 exec_runner.go:144] found /Users/jenkins/minikube-integration/19389-6671/.minikube/cert.pem, removing ...
	I0807 11:02:59.471747    9637 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19389-6671/.minikube/cert.pem
	I0807 11:02:59.471801    9637 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19389-6671/.minikube/cert.pem (1123 bytes)
	I0807 11:02:59.471925    9637 exec_runner.go:144] found /Users/jenkins/minikube-integration/19389-6671/.minikube/key.pem, removing ...
	I0807 11:02:59.471929    9637 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19389-6671/.minikube/key.pem
	I0807 11:02:59.471979    9637 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19389-6671/.minikube/key.pem (1675 bytes)
	I0807 11:02:59.472072    9637 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-423000 san=[127.0.0.1 localhost minikube stopped-upgrade-423000]
	I0807 11:02:59.555516    9637 provision.go:177] copyRemoteCerts
	I0807 11:02:59.555562    9637 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 11:02:59.555571    9637 sshutil.go:53] new ssh client: &{IP:localhost Port:51441 SSHKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	I0807 11:02:59.593937    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 11:02:59.601042    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0807 11:02:59.607510    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0807 11:02:59.614131    9637 provision.go:87] duration metric: took 142.791583ms to configureAuth
	I0807 11:02:59.614140    9637 buildroot.go:189] setting minikube options for container-runtime
	I0807 11:02:59.614252    9637 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0807 11:02:59.614290    9637 main.go:141] libmachine: Using SSH client type: native
	I0807 11:02:59.614375    9637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bdea10] 0x100be1270 <nil>  [] 0s} localhost 51441 <nil> <nil>}
	I0807 11:02:59.614382    9637 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0807 11:02:59.685268    9637 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0807 11:02:59.685279    9637 buildroot.go:70] root file system type: tmpfs
	I0807 11:02:59.685334    9637 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0807 11:02:59.685397    9637 main.go:141] libmachine: Using SSH client type: native
	I0807 11:02:59.685523    9637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bdea10] 0x100be1270 <nil>  [] 0s} localhost 51441 <nil> <nil>}
	I0807 11:02:59.685559    9637 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0807 11:02:59.760832    9637 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0807 11:02:59.760895    9637 main.go:141] libmachine: Using SSH client type: native
	I0807 11:02:59.761007    9637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bdea10] 0x100be1270 <nil>  [] 0s} localhost 51441 <nil> <nil>}
	I0807 11:02:59.761015    9637 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0807 11:03:00.133339    9637 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0807 11:03:00.133353    9637 machine.go:97] duration metric: took 926.822792ms to provisionDockerMachine
	I0807 11:03:00.133360    9637 start.go:293] postStartSetup for "stopped-upgrade-423000" (driver="qemu2")
	I0807 11:03:00.133366    9637 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 11:03:00.133417    9637 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 11:03:00.133427    9637 sshutil.go:53] new ssh client: &{IP:localhost Port:51441 SSHKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	I0807 11:03:00.171921    9637 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 11:03:00.173193    9637 info.go:137] Remote host: Buildroot 2021.02.12
	I0807 11:03:00.173200    9637 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19389-6671/.minikube/addons for local assets ...
	I0807 11:03:00.173289    9637 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19389-6671/.minikube/files for local assets ...
	I0807 11:03:00.173415    9637 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19389-6671/.minikube/files/etc/ssl/certs/71662.pem -> 71662.pem in /etc/ssl/certs
	I0807 11:03:00.173563    9637 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 11:03:00.176258    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/files/etc/ssl/certs/71662.pem --> /etc/ssl/certs/71662.pem (1708 bytes)
	I0807 11:03:00.182635    9637 start.go:296] duration metric: took 49.270667ms for postStartSetup
	I0807 11:03:00.182649    9637 fix.go:56] duration metric: took 21.328482417s for fixHost
	I0807 11:03:00.182681    9637 main.go:141] libmachine: Using SSH client type: native
	I0807 11:03:00.182794    9637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bdea10] 0x100be1270 <nil>  [] 0s} localhost 51441 <nil> <nil>}
	I0807 11:03:00.182801    9637 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0807 11:03:00.258449    9637 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723053780.549436296
	
	I0807 11:03:00.258458    9637 fix.go:216] guest clock: 1723053780.549436296
	I0807 11:03:00.258462    9637 fix.go:229] Guest: 2024-08-07 11:03:00.549436296 -0700 PDT Remote: 2024-08-07 11:03:00.182651 -0700 PDT m=+21.444152959 (delta=366.785296ms)
	I0807 11:03:00.258479    9637 fix.go:200] guest clock delta is within tolerance: 366.785296ms
	I0807 11:03:00.258482    9637 start.go:83] releasing machines lock for "stopped-upgrade-423000", held for 21.404324458s
	I0807 11:03:00.258551    9637 ssh_runner.go:195] Run: cat /version.json
	I0807 11:03:00.258560    9637 sshutil.go:53] new ssh client: &{IP:localhost Port:51441 SSHKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	I0807 11:03:00.258551    9637 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0807 11:03:00.258624    9637 sshutil.go:53] new ssh client: &{IP:localhost Port:51441 SSHKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	W0807 11:03:00.259151    9637 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51441: connect: connection refused
	I0807 11:03:00.259172    9637 retry.go:31] will retry after 206.590189ms: dial tcp [::1]:51441: connect: connection refused
	W0807 11:03:00.295526    9637 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0807 11:03:00.295576    9637 ssh_runner.go:195] Run: systemctl --version
	I0807 11:03:00.297384    9637 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 11:03:00.298866    9637 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 11:03:00.298889    9637 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0807 11:03:00.301911    9637 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0807 11:03:00.306586    9637 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0807 11:03:00.306598    9637 start.go:495] detecting cgroup driver to use...
	I0807 11:03:00.306673    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 11:03:00.314049    9637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0807 11:03:00.317587    9637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0807 11:03:00.320906    9637 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0807 11:03:00.320933    9637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0807 11:03:00.323713    9637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 11:03:00.326573    9637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0807 11:03:00.329953    9637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 11:03:00.333317    9637 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 11:03:00.336447    9637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0807 11:03:00.339290    9637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0807 11:03:00.347597    9637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0807 11:03:00.352069    9637 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 11:03:00.354797    9637 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 11:03:00.357619    9637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 11:03:00.432382    9637 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0807 11:03:00.437998    9637 start.go:495] detecting cgroup driver to use...
	I0807 11:03:00.438072    9637 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0807 11:03:00.443886    9637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 11:03:00.449137    9637 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 11:03:00.456601    9637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 11:03:00.461253    9637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 11:03:00.465948    9637 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0807 11:03:00.488745    9637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 11:03:00.493040    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 11:03:00.498638    9637 ssh_runner.go:195] Run: which cri-dockerd
	I0807 11:03:00.499865    9637 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0807 11:03:00.502494    9637 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0807 11:03:00.508883    9637 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0807 11:03:00.592384    9637 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0807 11:03:00.656697    9637 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0807 11:03:00.656766    9637 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0807 11:03:00.661999    9637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 11:03:00.737284    9637 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 11:03:01.852786    9637 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.115492667s)
	I0807 11:03:01.852849    9637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0807 11:03:01.857800    9637 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0807 11:03:01.865915    9637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 11:03:01.871169    9637 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0807 11:03:01.944635    9637 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0807 11:03:02.004356    9637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 11:03:02.071563    9637 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0807 11:03:02.077626    9637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 11:03:02.082404    9637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 11:03:02.140879    9637 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0807 11:03:02.179784    9637 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0807 11:03:02.179858    9637 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0807 11:03:02.182601    9637 start.go:563] Will wait 60s for crictl version
	I0807 11:03:02.182669    9637 ssh_runner.go:195] Run: which crictl
	I0807 11:03:02.184116    9637 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 11:03:02.197920    9637 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0807 11:03:02.197986    9637 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 11:03:02.214404    9637 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 11:03:02.235572    9637 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0807 11:03:02.235657    9637 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0807 11:03:02.236969    9637 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 11:03:02.240839    9637 kubeadm.go:883] updating cluster {Name:stopped-upgrade-423000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51476 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-423000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0807 11:03:02.240881    9637 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0807 11:03:02.240923    9637 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0807 11:03:02.251470    9637 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0807 11:03:02.251478    9637 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0807 11:03:02.251525    9637 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0807 11:03:02.254603    9637 ssh_runner.go:195] Run: which lz4
	I0807 11:03:02.255899    9637 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0807 11:03:02.257050    9637 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0807 11:03:02.257058    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0807 11:03:03.174358    9637 docker.go:649] duration metric: took 918.497167ms to copy over tarball
	I0807 11:03:03.174423    9637 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0807 11:03:04.340385    9637 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.165957625s)
	I0807 11:03:04.340398    9637 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0807 11:03:04.355530    9637 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0807 11:03:04.358505    9637 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0807 11:03:04.363314    9637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 11:03:04.446509    9637 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 11:03:05.611464    9637 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.16494575s)
	I0807 11:03:05.611550    9637 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0807 11:03:05.624573    9637 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0807 11:03:05.624582    9637 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0807 11:03:05.624587    9637 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0807 11:03:05.629979    9637 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 11:03:05.631934    9637 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0807 11:03:05.633568    9637 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0807 11:03:05.633645    9637 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 11:03:05.635548    9637 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0807 11:03:05.635752    9637 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0807 11:03:05.636785    9637 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0807 11:03:05.637188    9637 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0807 11:03:05.638550    9637 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0807 11:03:05.638551    9637 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0807 11:03:05.639970    9637 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0807 11:03:05.639973    9637 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0807 11:03:05.641145    9637 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0807 11:03:05.641405    9637 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0807 11:03:05.642330    9637 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0807 11:03:05.643404    9637 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0807 11:03:06.071424    9637 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0807 11:03:06.072137    9637 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0807 11:03:06.077246    9637 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0807 11:03:06.084142    9637 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0807 11:03:06.087260    9637 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0807 11:03:06.087281    9637 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0807 11:03:06.087287    9637 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0807 11:03:06.087299    9637 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0807 11:03:06.087331    9637 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0807 11:03:06.087331    9637 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0807 11:03:06.093264    9637 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0807 11:03:06.093286    9637 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0807 11:03:06.093340    9637 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0807 11:03:06.097121    9637 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0807 11:03:06.097144    9637 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0807 11:03:06.097127    9637 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0807 11:03:06.097171    9637 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0807 11:03:06.113867    9637 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0807 11:03:06.117428    9637 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0807 11:03:06.117440    9637 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	W0807 11:03:06.123492    9637 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0807 11:03:06.123628    9637 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0807 11:03:06.136500    9637 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0807 11:03:06.136510    9637 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0807 11:03:06.136525    9637 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0807 11:03:06.136527    9637 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0807 11:03:06.136568    9637 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0807 11:03:06.136567    9637 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0807 11:03:06.136579    9637 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0807 11:03:06.136600    9637 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0807 11:03:06.143085    9637 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0807 11:03:06.143106    9637 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0807 11:03:06.143157    9637 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0807 11:03:06.155680    9637 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0807 11:03:06.155686    9637 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0807 11:03:06.155708    9637 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0807 11:03:06.155797    9637 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0807 11:03:06.155798    9637 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0807 11:03:06.155799    9637 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0807 11:03:06.158098    9637 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0807 11:03:06.158109    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0807 11:03:06.158146    9637 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0807 11:03:06.158154    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0807 11:03:06.158159    9637 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0807 11:03:06.158172    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0807 11:03:06.192815    9637 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0807 11:03:06.192834    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0807 11:03:06.253482    9637 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0807 11:03:06.253593    9637 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 11:03:06.283932    9637 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0807 11:03:06.283960    9637 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0807 11:03:06.283968    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0807 11:03:06.292153    9637 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0807 11:03:06.292178    9637 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 11:03:06.292236    9637 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 11:03:06.378457    9637 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0807 11:03:06.378468    9637 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0807 11:03:06.378587    9637 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0807 11:03:06.391375    9637 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0807 11:03:06.391409    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0807 11:03:06.455062    9637 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0807 11:03:06.455077    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0807 11:03:06.813858    9637 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0807 11:03:06.813881    9637 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0807 11:03:06.813889    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0807 11:03:06.953703    9637 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0807 11:03:06.953743    9637 cache_images.go:92] duration metric: took 1.32915825s to LoadCachedImages
	W0807 11:03:06.953786    9637 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
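The lines above show the cache-load path in full: stat the image tarball on the node, scp it over from the host-side cache only when the stat fails, then stream it into the runtime with "sudo cat <tarball> | docker load". A minimal Go sketch of that check-then-load pattern follows; the ssh/scp invocations and paths are illustrative stand-ins, not minikube's internal ssh_runner API.

    // Check-then-load: stat the tarball on the node, transfer it only if
    // missing, then pipe it into docker load, mirroring the log above.
    package imagecache

    import (
    	"fmt"
    	"os/exec"
    )

    func LoadCachedImage(node, cachePath, nodePath string) error {
    	// Existence check: stat exits non-zero when the file is absent.
    	if err := exec.Command("ssh", node, "stat", nodePath).Run(); err != nil {
    		// Transfer the tarball from the host cache (hypothetical paths).
    		if err := exec.Command("scp", cachePath, node+":"+nodePath).Run(); err != nil {
    			return fmt.Errorf("scp %s: %w", cachePath, err)
    		}
    	}
    	// Load into the container runtime, as in `sudo cat ... | docker load`.
    	load := fmt.Sprintf("sudo cat %s | docker load", nodePath)
    	out, err := exec.Command("ssh", node, "/bin/bash", "-c", load).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("docker load: %v: %s", err, out)
    	}
    	return nil
    }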
	I0807 11:03:06.953792    9637 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0807 11:03:06.953841    9637 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-423000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-423000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 11:03:06.953913    9637 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0807 11:03:06.968550    9637 cni.go:84] Creating CNI manager for ""
	I0807 11:03:06.968562    9637 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 11:03:06.968566    9637 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 11:03:06.968574    9637 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-423000 NodeName:stopped-upgrade-423000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 11:03:06.968639    9637 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-423000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0807 11:03:06.968701    9637 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0807 11:03:06.971381    9637 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 11:03:06.971408    9637 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0807 11:03:06.974082    9637 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0807 11:03:06.978928    9637 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 11:03:06.985255    9637 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
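The three "scp memory" lines above write generated content (the kubelet drop-in, the service unit, kubeadm.yaml.new) straight from the host process to files on the node, with no intermediate local file. A rough equivalent using golang.org/x/crypto/ssh, with connection setup elided and a sudo-tee pipe standing in for whatever transport minikube actually uses:

    // Stream in-memory bytes to a root-owned file on the node over SSH.
    package sshcopy

    import (
    	"fmt"

    	"golang.org/x/crypto/ssh"
    )

    func WriteRemote(client *ssh.Client, path string, data []byte) error {
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	stdin, err := sess.StdinPipe()
    	if err != nil {
    		return err
    	}
    	// sudo tee copies stdin to the target path; the echo is discarded.
    	if err := sess.Start(fmt.Sprintf("sudo tee %q >/dev/null", path)); err != nil {
    		return err
    	}
    	if _, err := stdin.Write(data); err != nil {
    		return err
    	}
    	stdin.Close() // EOF lets tee finish
    	return sess.Wait()
    }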
	I0807 11:03:06.990825    9637 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0807 11:03:06.992176    9637 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
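The hosts update above is deliberately idempotent: grep first checks for an exact "10.0.2.15<tab>control-plane.minikube.internal" entry, and only on a miss is /etc/hosts rewritten by filtering out any stale line for that name and appending the fresh mapping via a temp file. The same filter-and-append logic in plain Go, assuming direct file access rather than the remote shell used here:

    // Drop any stale mapping for name, append the fresh one, rewrite in place.
    package hosts

    import (
    	"os"
    	"strings"
    )

    func EnsureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
    	kept := lines[:0]
    	for _, line := range lines {
    		// Mirrors grep -v $'\t<name>$': skip the old mapping if present.
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }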
	I0807 11:03:06.995582    9637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 11:03:07.057135    9637 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 11:03:07.064131    9637 certs.go:68] Setting up /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000 for IP: 10.0.2.15
	I0807 11:03:07.064140    9637 certs.go:194] generating shared ca certs ...
	I0807 11:03:07.064148    9637 certs.go:226] acquiring lock for ca certs: {Name:mkf594adfb50ee91964d2e538bbb4ff47398b8a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:03:07.064361    9637 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/ca.key
	I0807 11:03:07.064415    9637 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/proxy-client-ca.key
	I0807 11:03:07.064424    9637 certs.go:256] generating profile certs ...
	I0807 11:03:07.064499    9637 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/client.key
	I0807 11:03:07.064517    9637 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.key.a2b51d81
	I0807 11:03:07.064528    9637 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.crt.a2b51d81 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0807 11:03:07.123319    9637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.crt.a2b51d81 ...
	I0807 11:03:07.123346    9637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.crt.a2b51d81: {Name:mkd86c55b851f33026777198b4f1c97f247eadad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:03:07.123676    9637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.key.a2b51d81 ...
	I0807 11:03:07.123682    9637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.key.a2b51d81: {Name:mka4e55d19e716cda36b012f8d3e655d682732c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:03:07.123823    9637 certs.go:381] copying /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.crt.a2b51d81 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.crt
	I0807 11:03:07.127992    9637 certs.go:385] copying /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.key.a2b51d81 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.key
	I0807 11:03:07.128179    9637 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/proxy-client.key
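The apiserver profile cert generated above carries IP SANs for the service VIP (10.96.0.1), loopback, and the node IP, signed by the shared minikube CA. A compact crypto/x509 sketch of a cert with that SAN list; the subject, lifetime, and key size are illustrative choices, not necessarily the values minikube uses:

    // Generate an apiserver serving cert whose IP SANs match the log above.
    package certs

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    func NewAPIServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"}, // illustrative CN
    		IPAddresses: []net.IP{ // the SAN list from the log
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
    		},
    		NotBefore:   time.Now(),
    		NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	return der, key, err
    }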
	I0807 11:03:07.128318    9637 certs.go:484] found cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/7166.pem (1338 bytes)
	W0807 11:03:07.128356    9637 certs.go:480] ignoring /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/7166_empty.pem, impossibly tiny 0 bytes
	I0807 11:03:07.128362    9637 certs.go:484] found cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca-key.pem (1675 bytes)
	I0807 11:03:07.128381    9637 certs.go:484] found cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem (1082 bytes)
	I0807 11:03:07.128401    9637 certs.go:484] found cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem (1123 bytes)
	I0807 11:03:07.128421    9637 certs.go:484] found cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/key.pem (1675 bytes)
	I0807 11:03:07.128469    9637 certs.go:484] found cert: /Users/jenkins/minikube-integration/19389-6671/.minikube/files/etc/ssl/certs/71662.pem (1708 bytes)
	I0807 11:03:07.128809    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 11:03:07.135691    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 11:03:07.142528    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 11:03:07.149595    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0807 11:03:07.156314    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0807 11:03:07.163043    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0807 11:03:07.169604    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 11:03:07.176650    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 11:03:07.184483    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/files/etc/ssl/certs/71662.pem --> /usr/share/ca-certificates/71662.pem (1708 bytes)
	I0807 11:03:07.191299    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 11:03:07.198159    9637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/7166.pem --> /usr/share/ca-certificates/7166.pem (1338 bytes)
	I0807 11:03:07.205261    9637 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 11:03:07.210654    9637 ssh_runner.go:195] Run: openssl version
	I0807 11:03:07.212619    9637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71662.pem && ln -fs /usr/share/ca-certificates/71662.pem /etc/ssl/certs/71662.pem"
	I0807 11:03:07.215496    9637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71662.pem
	I0807 11:03:07.216862    9637 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 17:47 /usr/share/ca-certificates/71662.pem
	I0807 11:03:07.216880    9637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71662.pem
	I0807 11:03:07.218531    9637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71662.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 11:03:07.222087    9637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 11:03:07.225668    9637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 11:03:07.227153    9637 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:59 /usr/share/ca-certificates/minikubeCA.pem
	I0807 11:03:07.227176    9637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 11:03:07.228911    9637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 11:03:07.231649    9637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7166.pem && ln -fs /usr/share/ca-certificates/7166.pem /etc/ssl/certs/7166.pem"
	I0807 11:03:07.234691    9637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7166.pem
	I0807 11:03:07.236230    9637 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 17:47 /usr/share/ca-certificates/7166.pem
	I0807 11:03:07.236253    9637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7166.pem
	I0807 11:03:07.237920    9637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7166.pem /etc/ssl/certs/51391683.0"
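Each installed PEM above is also linked into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), the naming scheme OpenSSL's default verify path uses to find trust anchors. A small sketch that derives the hash the same way the log does, by shelling out to openssl x509 -hash:

    // Link a CA PEM into certsDir under its OpenSSL subject hash.
    package certlink

    import (
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func LinkBySubjectHash(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	os.Remove(link) // replace any stale link, like ln -fs
    	return os.Symlink(pemPath, link)
    }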
	I0807 11:03:07.241103    9637 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 11:03:07.242539    9637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0807 11:03:07.245230    9637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0807 11:03:07.247426    9637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0807 11:03:07.249514    9637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0807 11:03:07.251522    9637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0807 11:03:07.253158    9637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
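The six openssl runs above all use "-checkend 86400", i.e. "will this cert still be valid 24 hours from now"; a cert failing the check would be regenerated. An equivalent check in Go, reading the NotAfter field directly:

    // Go equivalent of `openssl x509 -checkend <seconds>`.
    package certcheck

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"os"
    	"time"
    )

    func ValidFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, errors.New("no PEM block in " + path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// True if the cert outlives now+d, e.g. d = 24*time.Hour for -checkend 86400.
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }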
	I0807 11:03:07.255013    9637 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-423000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51476 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-423000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0807 11:03:07.255081    9637 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0807 11:03:07.265610    9637 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0807 11:03:07.269145    9637 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0807 11:03:07.269151    9637 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0807 11:03:07.269180    9637 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0807 11:03:07.271923    9637 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0807 11:03:07.272266    9637 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-423000" does not appear in /Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:03:07.272363    9637 kubeconfig.go:62] /Users/jenkins/minikube-integration/19389-6671/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-423000" cluster setting kubeconfig missing "stopped-upgrade-423000" context setting]
	I0807 11:03:07.272594    9637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/kubeconfig: {Name:mkee6d4905f7dc60ed0b5cf9ef87de0f637b0682 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:03:07.273062    9637 kapi.go:59] client config for stopped-upgrade-423000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/client.key", CAFile:"/Users/jenkins/minikube-integration/19389-6671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101f73f90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0807 11:03:07.273405    9637 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0807 11:03:07.276026    9637 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-423000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
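Drift detection above rests entirely on diff's exit status: 0 means the existing kubeadm.yaml matches the newly rendered one, 1 means they differ and the cluster must be reconfigured (here the CRI socket scheme and cgroup driver changed between versions). A sketch of reading that exit code:

    // diff -u exits 0 for identical files, 1 when they differ, >1 on error,
    // so the exit code alone signals config drift.
    package drift

    import (
    	"errors"
    	"os/exec"
    )

    func ConfigChanged(oldPath, newPath string) (bool, error) {
    	err := exec.Command("diff", "-u", oldPath, newPath).Run()
    	if err == nil {
    		return false, nil // identical
    	}
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
    		return true, nil // files differ: reconfigure
    	}
    	return false, err // diff itself failed
    }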
	I0807 11:03:07.276032    9637 kubeadm.go:1160] stopping kube-system containers ...
	I0807 11:03:07.276067    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0807 11:03:07.286754    9637 docker.go:483] Stopping containers: [18a4d38a0c8c 2d9c10a9a9e1 a895a6c8fd77 56e44fe63415 6b9b69239f16 1afe0fd1fec7 d139dcfead8f 4940a26d001e]
	I0807 11:03:07.286815    9637 ssh_runner.go:195] Run: docker stop 18a4d38a0c8c 2d9c10a9a9e1 a895a6c8fd77 56e44fe63415 6b9b69239f16 1afe0fd1fec7 d139dcfead8f 4940a26d001e
	I0807 11:03:07.297554    9637 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0807 11:03:07.303109    9637 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0807 11:03:07.306205    9637 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0807 11:03:07.306211    9637 kubeadm.go:157] found existing configuration files:
	
	I0807 11:03:07.306235    9637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/admin.conf
	I0807 11:03:07.309212    9637 kubeadm.go:163] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0807 11:03:07.309232    9637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0807 11:03:07.311731    9637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/kubelet.conf
	I0807 11:03:07.314249    9637 kubeadm.go:163] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0807 11:03:07.314269    9637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0807 11:03:07.317315    9637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/controller-manager.conf
	I0807 11:03:07.319796    9637 kubeadm.go:163] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0807 11:03:07.319820    9637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0807 11:03:07.322383    9637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/scheduler.conf
	I0807 11:03:07.325311    9637 kubeadm.go:163] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0807 11:03:07.325332    9637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
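The grep-then-rm sequence above prunes each kubeconfig under /etc/kubernetes that does not reference the expected endpoint "https://control-plane.minikube.internal:51476", so kubeadm will regenerate them; here all four were already absent. The same loop sketched in Go, assuming local file access:

    // Keep only configs that already point at the expected endpoint.
    package cleanup

    import (
    	"os"
    	"strings"
    )

    func PruneStaleConfigs(endpoint string, paths []string) error {
    	for _, p := range paths {
    		data, err := os.ReadFile(p)
    		if os.IsNotExist(err) {
    			continue // nothing to prune, as in the log above
    		} else if err != nil {
    			return err
    		}
    		if !strings.Contains(string(data), endpoint) {
    			if err := os.Remove(p); err != nil {
    				return err
    			}
    		}
    	}
    	return nil
    }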
	I0807 11:03:07.327876    9637 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0807 11:03:07.330541    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0807 11:03:07.352951    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0807 11:03:07.914828    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0807 11:03:08.026502    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0807 11:03:08.045663    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
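Rather than a full "kubeadm init", the restart path replays the init phases one by one (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly copied kubeadm.yaml. A sketch of that sequencing; the PATH prefix mirrors the log lines above, and the phase list is taken from them:

    // Run the kubeadm init phases in order against one rendered config.
    package phases

    import (
    	"fmt"
    	"os/exec"
    )

    func RunInitPhases(binDir, cfg string) error {
    	for _, phase := range []string{
    		"certs all", "kubeconfig all", "kubelet-start",
    		"control-plane all", "etcd local",
    	} {
    		cmd := fmt.Sprintf(
    			"sudo env PATH=%s:$PATH kubeadm init phase %s --config %s",
    			binDir, phase, cfg)
    		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
    			return fmt.Errorf("phase %q: %v: %s", phase, err, out)
    		}
    	}
    	return nil
    }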
	I0807 11:03:08.066514    9637 api_server.go:52] waiting for apiserver process to appear ...
	I0807 11:03:08.066599    9637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 11:03:08.568657    9637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 11:03:09.068699    9637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 11:03:09.072905    9637 api_server.go:72] duration metric: took 1.006399709s to wait for apiserver process to appear ...
	I0807 11:03:09.072913    9637 api_server.go:88] waiting for apiserver healthz status ...
	I0807 11:03:09.072922    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:14.074245    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:14.074288    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:19.074945    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:19.075032    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:24.075298    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:24.075374    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:29.075878    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:29.075931    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:34.076581    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:34.076629    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:39.077523    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:39.077572    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:44.078486    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:44.078510    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:49.079620    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:49.079644    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:54.081199    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:54.081271    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:03:59.083492    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:03:59.083532    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:04.085669    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:04.085689    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:09.085960    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
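Each healthz probe above fails after a fixed per-request deadline (the repeated "context deadline exceeded"), and probing continues on an outer cadence until the apiserver answers or the wait budget expires, which is what ultimately times this test out. A minimal polling sketch; the skip-verify transport is only to keep the example self-contained, where a real client would pin the cluster CA:

    // Poll an HTTPS healthz endpoint until it returns 200 or the budget ends.
    package health

    import (
    	"crypto/tls"
    	"net/http"
    	"time"
    )

    func WaitHealthy(url string, budget time.Duration) bool {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // per-probe deadline, as in the log
    		Transport: &http.Transport{
    			// Assumption for the sketch only: skip cert verification.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(budget)
    	for time.Now().Before(deadline) {
    		if resp, err := client.Get(url); err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return true
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return false
    }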
	I0807 11:04:09.086082    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:04:09.100128    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:04:09.100198    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:04:09.111954    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:04:09.112027    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:04:09.122298    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:04:09.122367    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:04:09.132716    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:04:09.132787    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:04:09.142741    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:04:09.142799    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:04:09.153515    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:04:09.153589    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:04:09.164163    9637 logs.go:276] 0 containers: []
	W0807 11:04:09.164174    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:04:09.164229    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:04:09.174538    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:04:09.174562    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:04:09.174567    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:04:09.178831    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:04:09.178840    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:04:09.286102    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:04:09.286113    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:04:09.300789    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:04:09.300802    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:04:09.314628    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:04:09.314640    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:04:09.333273    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:04:09.333283    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:04:09.349797    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:04:09.349809    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:04:09.361913    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:04:09.361925    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:04:09.399294    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:04:09.399303    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:04:09.413909    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:04:09.413922    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:04:09.425551    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:04:09.425564    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:04:09.439834    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:04:09.439845    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:04:09.451549    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:04:09.451561    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:04:09.479446    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:04:09.479457    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:04:09.495469    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:04:09.495480    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:04:09.510175    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:04:09.510188    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:04:09.522264    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:04:09.522274    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
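Between probes, the diagnostics pass above repeats a fixed recipe per component: list matching container IDs with a docker name filter, then tail the last 400 log lines of each, plus journalctl for kubelet and Docker. The per-component half of that loop, sketched:

    // Collect the last 400 log lines for every container of one component.
    package gather

    import (
    	"os/exec"
    	"strings"
    )

    func ComponentLogs(component string) (map[string]string, error) {
    	filter := "name=k8s_" + component
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", filter, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	logs := map[string]string{}
    	for _, id := range strings.Fields(string(out)) {
    		tail, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		if err != nil {
    			return nil, err
    		}
    		logs[id] = string(tail)
    	}
    	return logs, nil
    }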
	I0807 11:04:12.047937    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:17.050343    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:17.050546    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:04:17.072600    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:04:17.072710    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:04:17.088385    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:04:17.088463    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:04:17.106345    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:04:17.106409    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:04:17.117271    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:04:17.117331    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:04:17.127579    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:04:17.127648    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:04:17.138214    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:04:17.138281    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:04:17.148545    9637 logs.go:276] 0 containers: []
	W0807 11:04:17.148557    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:04:17.148608    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:04:17.163049    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:04:17.163066    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:04:17.163072    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:04:17.167553    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:04:17.167562    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:04:17.181326    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:04:17.181339    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:04:17.218495    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:04:17.218504    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:04:17.242592    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:04:17.242601    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:04:17.253573    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:04:17.253585    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:04:17.268218    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:04:17.268228    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:04:17.293655    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:04:17.293666    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:04:17.307703    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:04:17.307713    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:04:17.321871    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:04:17.321887    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:04:17.336405    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:04:17.336414    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:04:17.354687    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:04:17.354697    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:04:17.366613    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:04:17.366624    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:04:17.385788    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:04:17.385798    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:04:17.424306    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:04:17.424317    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:04:17.435646    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:04:17.435660    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:04:17.449116    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:04:17.449132    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:04:19.960845    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:24.963311    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:24.963549    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:04:24.978435    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:04:24.978504    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:04:24.989570    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:04:24.989630    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:04:25.001044    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:04:25.001112    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:04:25.014420    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:04:25.014485    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:04:25.024727    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:04:25.024789    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:04:25.035322    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:04:25.035387    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:04:25.048966    9637 logs.go:276] 0 containers: []
	W0807 11:04:25.048980    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:04:25.049037    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:04:25.059472    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:04:25.059503    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:04:25.059509    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:04:25.071527    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:04:25.071538    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:04:25.083605    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:04:25.083615    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:04:25.121213    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:04:25.121222    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:04:25.125293    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:04:25.125299    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:04:25.139439    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:04:25.139450    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:04:25.157000    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:04:25.157011    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:04:25.169167    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:04:25.169176    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:04:25.194443    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:04:25.194459    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:04:25.208861    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:04:25.208871    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:04:25.220453    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:04:25.220468    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:04:25.247592    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:04:25.247608    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:04:25.286365    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:04:25.286382    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:04:25.300598    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:04:25.300613    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:04:25.315238    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:04:25.315252    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:04:25.331140    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:04:25.331152    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:04:25.346362    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:04:25.346377    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:04:27.862306    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:32.864901    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:32.865349    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:04:32.904989    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:04:32.905126    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:04:32.926120    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:04:32.926226    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:04:32.941483    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:04:32.941556    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:04:32.954199    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:04:32.954271    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:04:32.967393    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:04:32.967463    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:04:32.978629    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:04:32.978723    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:04:32.989474    9637 logs.go:276] 0 containers: []
	W0807 11:04:32.989484    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:04:32.989538    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:04:33.000668    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:04:33.000687    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:04:33.000693    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:04:33.014804    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:04:33.014815    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:04:33.039664    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:04:33.039677    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:04:33.055558    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:04:33.055574    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:04:33.068246    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:04:33.068258    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:04:33.105591    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:04:33.105602    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:04:33.120245    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:04:33.120254    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:04:33.131469    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:04:33.131481    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:04:33.143201    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:04:33.143212    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:04:33.169453    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:04:33.169461    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:04:33.173293    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:04:33.173299    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:04:33.190284    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:04:33.190294    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:04:33.215803    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:04:33.215814    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:04:33.230131    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:04:33.230141    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:04:33.246354    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:04:33.246364    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:04:33.261095    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:04:33.261108    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:04:33.297599    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:04:33.297610    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:04:35.814211    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:40.816757    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:40.817195    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:04:40.851613    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:04:40.851759    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:04:40.873560    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:04:40.873656    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:04:40.888412    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:04:40.888477    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:04:40.900559    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:04:40.900631    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:04:40.912267    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:04:40.912339    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:04:40.923379    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:04:40.923449    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:04:40.934256    9637 logs.go:276] 0 containers: []
	W0807 11:04:40.934277    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:04:40.934338    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:04:40.945257    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
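
Each retry cycle starts by discovering per-component containers with name filters of the form k8s_<component>, the kubeadm/dockershim naming convention visible above. A sketch replaying the same discovery with the exact filters from the log:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      ids=$(docker ps -a --filter=name=k8s_${c} --format='{{.ID}}')
      echo "${c}: ${ids:-none}"   # kindnet resolves to none on this cluster
    done
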
	I0807 11:04:40.945274    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:04:40.945280    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:04:40.966225    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:04:40.966238    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:04:40.985418    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:04:40.985429    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:04:41.000001    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:04:41.000011    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:04:41.025248    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:04:41.025255    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:04:41.063173    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:04:41.063184    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:04:41.078171    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:04:41.078183    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:04:41.089510    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:04:41.089519    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:04:41.126631    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:04:41.126644    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:04:41.141246    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:04:41.141257    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:04:41.154076    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:04:41.154087    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:04:41.166025    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:04:41.166040    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
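
The "container status" step is a fallback chain: when crictl is not installed, `which crictl` prints nothing and fails, `|| echo crictl` substitutes the bare name, the sudo invocation of that bare name fails, and control falls through to plain docker. The same one-liner as in the log, annotated:

    # Prefer crictl when installed; otherwise the substituted bare name fails
    # under sudo and the || branch falls back to docker ps.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
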
	I0807 11:04:41.178300    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:04:41.178309    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:04:41.182268    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:04:41.182276    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:04:41.198033    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:04:41.198047    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:04:41.216627    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:04:41.216638    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:04:41.248550    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:04:41.248562    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
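
Taken together, one gathering pass covers the host units, the kernel ring buffer, the node view, and every control-plane container (two container IDs per restarted component, one per stable one). A sketch that replays a full pass using only commands and IDs that appear verbatim in the log above:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    for id in a06f2d638f11 a895a6c8fd77 d08942694678 18a4d38a0c8c 4c6cc0aa8ad2 \
              1eafeb79fa7e 2d9c10a9a9e1 2e71d24f909a 48983944b9f0 56e44fe63415 \
              d7b2601a2a06 5b6b286aa22e; do
      docker logs --tail 400 "$id"
    done
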
	I0807 11:04:43.760461    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:48.762361    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:48.762476    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:04:48.773648    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:04:48.773720    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:04:48.785009    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:04:48.785089    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:04:48.796212    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:04:48.796281    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:04:48.807119    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:04:48.807185    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:04:48.817454    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:04:48.817514    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:04:48.828166    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:04:48.828230    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:04:48.838658    9637 logs.go:276] 0 containers: []
	W0807 11:04:48.838669    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:04:48.838721    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:04:48.851912    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:04:48.851934    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:04:48.851940    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:04:48.891416    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:04:48.891423    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:04:48.927675    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:04:48.927687    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:04:48.952917    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:04:48.952928    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:04:48.969997    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:04:48.970008    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:04:48.984343    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:04:48.984356    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:04:49.002989    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:04:49.003002    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:04:49.020106    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:04:49.020120    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:04:49.025526    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:04:49.025546    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:04:49.038761    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:04:49.038772    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:04:49.057788    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:04:49.057800    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:04:49.083070    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:04:49.083083    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:04:49.098236    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:04:49.098253    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:04:49.113502    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:04:49.113517    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:04:49.125564    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:04:49.125577    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:04:49.141307    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:04:49.141325    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:04:49.157049    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:04:49.157061    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:04:51.672755    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:04:56.673960    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:04:56.674075    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:04:56.685591    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:04:56.685670    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:04:56.696428    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:04:56.696495    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:04:56.711350    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:04:56.711422    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:04:56.722670    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:04:56.722744    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:04:56.733333    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:04:56.733401    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:04:56.743639    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:04:56.743704    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:04:56.757965    9637 logs.go:276] 0 containers: []
	W0807 11:04:56.757979    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:04:56.758030    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:04:56.768784    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:04:56.768800    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:04:56.768806    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:04:56.793532    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:04:56.793544    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:04:56.808292    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:04:56.808303    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:04:56.825157    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:04:56.825166    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:04:56.840595    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:04:56.840613    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:04:56.845335    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:04:56.845347    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:04:56.883691    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:04:56.883710    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:04:56.901362    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:04:56.901376    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:04:56.913989    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:04:56.914001    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:04:56.930917    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:04:56.930928    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:04:56.957134    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:04:56.957147    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:04:56.997865    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:04:56.997880    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:04:57.012723    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:04:57.012736    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:04:57.028634    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:04:57.028643    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:04:57.041767    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:04:57.041776    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:04:57.053977    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:04:57.053990    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:04:57.069786    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:04:57.069797    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:04:59.588158    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:04.590718    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:04.590899    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:04.607753    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:05:04.607838    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:04.619122    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:05:04.619185    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:04.630193    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:05:04.630259    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:04.640803    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:05:04.640868    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:04.651456    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:05:04.651516    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:04.662827    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:05:04.662889    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:04.672984    9637 logs.go:276] 0 containers: []
	W0807 11:05:04.672995    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:04.673047    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:04.684506    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:05:04.684524    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:05:04.684530    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:05:04.696894    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:04.696904    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:04.701071    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:05:04.701080    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:05:04.716294    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:05:04.716304    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:05:04.744484    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:05:04.744501    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:05:04.757404    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:04.757416    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:04.795113    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:05:04.795125    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:05:04.807109    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:04.807122    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:04.848068    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:05:04.848090    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:05:04.863175    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:05:04.863188    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:05:04.879474    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:05:04.879488    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:05:04.902875    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:04.902886    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:04.927906    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:05:04.927918    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:04.940685    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:05:04.940693    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:05:04.957257    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:05:04.957272    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:05:04.973591    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:05:04.973604    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:05:04.993438    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:05:04.993449    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:05:07.507799    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:12.510092    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:12.510434    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:12.548560    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:05:12.548676    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:12.575156    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:05:12.575215    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:12.594364    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:05:12.594433    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:12.607328    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:05:12.607365    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:12.619503    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:05:12.619555    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:12.636290    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:05:12.636358    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:12.647666    9637 logs.go:276] 0 containers: []
	W0807 11:05:12.647675    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:12.647734    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:12.662877    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:05:12.662893    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:12.662905    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:12.667768    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:05:12.667780    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:05:12.684204    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:05:12.684217    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:05:12.696482    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:05:12.696494    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:05:12.709117    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:05:12.709133    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:05:12.728219    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:05:12.728229    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:05:12.744215    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:05:12.744229    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:05:12.759049    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:05:12.759062    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:05:12.774003    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:05:12.774014    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:05:12.790045    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:05:12.790056    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:05:12.805353    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:05:12.805362    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:12.817956    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:12.817967    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:12.861152    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:12.861162    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:12.897737    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:05:12.897749    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:05:12.922239    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:05:12.922252    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:05:12.933662    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:05:12.933672    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:05:12.945437    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:12.945450    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:15.470962    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:20.473106    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:20.473186    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:20.484884    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:05:20.484955    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:20.496344    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:05:20.496418    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:20.507470    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:05:20.507535    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:20.518737    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:05:20.518809    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:20.530172    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:05:20.530241    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:20.541507    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:05:20.541579    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:20.553211    9637 logs.go:276] 0 containers: []
	W0807 11:05:20.553224    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:20.553292    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:20.564572    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:05:20.564590    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:20.564596    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:20.605221    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:20.605234    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:20.609778    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:05:20.609790    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:05:20.642138    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:05:20.642148    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:05:20.657124    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:05:20.657133    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:05:20.672870    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:20.672879    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:20.709915    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:05:20.709924    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:05:20.725369    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:05:20.725380    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:05:20.741591    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:05:20.741608    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:20.755415    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:05:20.755426    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:05:20.773025    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:05:20.773036    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:05:20.787024    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:05:20.787036    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:05:20.798942    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:20.798958    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:20.824091    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:05:20.824098    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:05:20.835817    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:05:20.835828    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:05:20.853511    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:05:20.853520    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:05:20.874424    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:05:20.874434    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:05:23.387830    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:28.390097    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:28.390168    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:28.402171    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:05:28.402250    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:28.414213    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:05:28.414277    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:28.425311    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:05:28.425386    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:28.436834    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:05:28.436913    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:28.449322    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:05:28.449391    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:28.461039    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:05:28.461111    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:28.471890    9637 logs.go:276] 0 containers: []
	W0807 11:05:28.471903    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:28.471968    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:28.483700    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:05:28.483715    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:05:28.483720    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:05:28.502381    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:05:28.502395    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:28.522930    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:05:28.522941    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:05:28.537421    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:05:28.537432    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:05:28.552638    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:05:28.552649    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:05:28.569592    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:05:28.569604    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:05:28.581747    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:05:28.581761    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:05:28.608451    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:05:28.608468    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:05:28.625208    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:05:28.625219    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:05:28.640256    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:05:28.640268    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:05:28.652554    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:28.652565    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:28.691343    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:28.691350    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:28.695951    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:05:28.695958    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:05:28.708539    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:05:28.708550    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:05:28.723536    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:28.723546    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:28.748505    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:28.748512    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:28.787893    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:05:28.787905    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:05:31.309474    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:36.312027    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:36.312137    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:36.328806    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:05:36.328874    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:36.340690    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:05:36.340759    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:36.352710    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:05:36.352779    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:36.364193    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:05:36.364263    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:36.376416    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:05:36.376506    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:36.388149    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:05:36.388221    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:36.398959    9637 logs.go:276] 0 containers: []
	W0807 11:05:36.398969    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:36.399027    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:36.411024    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:05:36.411039    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:05:36.411044    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:05:36.425453    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:05:36.425462    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:05:36.441565    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:05:36.441574    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:05:36.455971    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:05:36.455985    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:05:36.468955    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:05:36.468967    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:36.481244    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:05:36.481256    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:05:36.495138    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:36.495152    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:36.521847    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:05:36.521858    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:05:36.533931    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:36.533945    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:36.572385    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:36.572393    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:36.576728    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:05:36.576735    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:05:36.604323    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:05:36.604335    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:05:36.621477    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:05:36.621488    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:05:36.642385    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:36.642397    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:36.676801    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:05:36.676813    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:05:36.693905    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:05:36.693916    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:05:36.707999    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:05:36.708008    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:05:39.224419    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:44.225890    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:05:44.225964    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:44.237003    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:05:44.237070    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:44.252791    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:05:44.252856    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:44.264643    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:05:44.264711    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:44.276194    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:05:44.276276    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:44.294070    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:05:44.294143    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:44.306264    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:05:44.306336    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:44.322556    9637 logs.go:276] 0 containers: []
	W0807 11:05:44.322569    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:44.322629    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:44.333992    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:05:44.334009    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:05:44.334014    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:05:44.346795    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:44.346806    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:44.384889    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:05:44.384903    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:05:44.396092    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:05:44.396103    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:05:44.410235    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:05:44.410247    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:05:44.425046    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:05:44.425059    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:05:44.450205    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:05:44.450215    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:05:44.461579    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:05:44.461592    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:05:44.472795    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:05:44.472806    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:44.484249    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:44.484260    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:44.488676    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:05:44.488685    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:05:44.505575    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:05:44.505589    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:05:44.521216    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:05:44.521226    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:05:44.537019    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:44.537032    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:44.574626    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:05:44.574636    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:05:44.597564    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:05:44.597577    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:05:44.614773    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:44.614787    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:47.140243    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:05:52.141601    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
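
Note the changed failure mode here: every earlier probe died on a client-side context deadline, while this one fails at dial with an i/o timeout, i.e. the TCP connect to 10.0.2.15:8443 never completed at all. A quick way to separate the two layers, assuming the guest ships a netcat build with -z support:

    # Exit status tells the layers apart: connect-level failure vs HTTP-level hang.
    nc -z -w 5 10.0.2.15 8443 \
      && echo "TCP connect ok; timeout was at the HTTP layer" \
      || echo "TCP connect failed; matches the dial i/o timeout"
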
	I0807 11:05:52.141680    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:05:52.153336    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:05:52.153413    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:05:52.164440    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:05:52.164511    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:05:52.175978    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:05:52.176063    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:05:52.187468    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:05:52.187545    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:05:52.197916    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:05:52.197987    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:05:52.209523    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:05:52.209594    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:05:52.220063    9637 logs.go:276] 0 containers: []
	W0807 11:05:52.220075    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:05:52.220132    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:05:52.230923    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:05:52.230944    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:05:52.230949    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:05:52.245776    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:05:52.245789    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:05:52.260117    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:05:52.260129    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:05:52.272132    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:05:52.272143    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:05:52.311553    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:05:52.311564    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:05:52.349447    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:05:52.349457    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:05:52.363788    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:05:52.363798    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:05:52.377627    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:05:52.377641    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:05:52.392646    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:05:52.392655    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:05:52.406514    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:05:52.406525    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:05:52.424212    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:05:52.424222    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:05:52.449108    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:05:52.449117    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:05:52.453244    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:05:52.453251    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:05:52.477481    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:05:52.477491    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:05:52.492806    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:05:52.492818    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:05:52.504374    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:05:52.504386    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:05:52.521662    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:05:52.521672    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:05:55.035702    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:00.038302    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:00.038398    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:00.049210    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:06:00.049281    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:00.059573    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:06:00.059652    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:00.070194    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:06:00.070262    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:00.080719    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:06:00.080790    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:00.091470    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:06:00.091535    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:00.102018    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:06:00.102092    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:00.112107    9637 logs.go:276] 0 containers: []
	W0807 11:06:00.112117    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:00.112168    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:00.122239    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:06:00.122257    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:06:00.122263    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:06:00.136372    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:06:00.136382    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:06:00.151583    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:06:00.151593    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:06:00.163449    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:06:00.163459    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:06:00.175110    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:06:00.175120    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:06:00.187263    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:00.187273    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:00.191722    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:06:00.191731    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:06:00.205689    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:00.205699    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:00.227914    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:00.227924    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:00.265860    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:00.265870    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:00.303069    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:06:00.303080    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:06:00.317272    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:06:00.317285    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:06:00.328557    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:06:00.328570    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:06:00.346145    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:06:00.346155    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:06:00.371919    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:06:00.371931    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:06:00.386403    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:06:00.386414    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:06:00.402312    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:06:00.402322    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
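The "Gathering logs for ..." steps then pull the last 400 lines from each source: docker logs --tail 400 <id> for containers, journalctl for the kubelet and Docker units, dmesg for the kernel, and kubectl describe nodes against the guest's kubeconfig. A compact sketch of that collection step (container ID copied from the log; in minikube these commands run inside the guest via ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one of the shell commands seen above and prints its combined
    // output, prefixed with the source name.
    func gather(name, cmd string) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Printf("==> %s (err=%v)\n%s\n", name, err, out)
    }

    func main() {
        gather("etcd [18a4d38a0c8c]", "docker logs --tail 400 18a4d38a0c8c")
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }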
	I0807 11:06:02.915897    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:07.918098    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:07.918186    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:07.928762    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:06:07.928835    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:07.939451    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:06:07.939525    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:07.950366    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:06:07.950434    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:07.961867    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:06:07.961940    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:07.973038    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:06:07.973112    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:07.984074    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:06:07.984143    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:07.994866    9637 logs.go:276] 0 containers: []
	W0807 11:06:07.994876    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:07.994933    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:08.005458    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:06:08.005476    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:06:08.005482    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:06:08.023795    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:06:08.023807    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:06:08.038581    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:06:08.038592    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:06:08.049620    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:06:08.049634    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:06:08.060919    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:08.060929    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:08.064966    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:06:08.064972    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:06:08.090100    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:06:08.090110    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:06:08.108514    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:08.108523    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:08.132814    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:06:08.132824    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:06:08.143891    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:06:08.143903    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:06:08.161364    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:08.161375    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:08.195227    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:06:08.195240    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:06:08.209358    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:06:08.209371    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:06:08.224043    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:06:08.224057    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:06:08.235726    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:06:08.235737    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:08.247821    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:08.247834    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:08.285920    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:06:08.285937    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:06:10.802076    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:15.804324    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:15.804430    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:15.815953    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:06:15.816024    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:15.830633    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:06:15.830703    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:15.842938    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:06:15.843015    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:15.854164    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:06:15.854241    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:15.864980    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:06:15.865050    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:15.876204    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:06:15.876271    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:15.885897    9637 logs.go:276] 0 containers: []
	W0807 11:06:15.885908    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:15.885960    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:15.896203    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:06:15.896224    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:15.896231    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:15.918680    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:15.918688    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:15.922928    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:06:15.922936    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:06:15.942290    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:06:15.942300    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:06:15.966774    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:06:15.966792    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:06:15.982564    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:06:15.982578    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:06:16.000764    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:06:16.000774    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:06:16.012286    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:06:16.012298    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:06:16.024883    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:06:16.024894    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:16.037524    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:16.037535    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:16.076489    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:16.076497    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:16.110093    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:06:16.110104    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:06:16.124843    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:06:16.124855    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:06:16.138072    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:06:16.138083    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:06:16.152521    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:06:16.152530    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:06:16.164761    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:06:16.164773    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:06:16.179470    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:06:16.179480    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:06:18.692648    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:23.694879    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:23.695076    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:23.706345    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:06:23.706420    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:23.716644    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:06:23.716716    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:23.727169    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:06:23.727240    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:23.737891    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:06:23.737957    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:23.748272    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:06:23.748342    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:23.758728    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:06:23.758795    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:23.768657    9637 logs.go:276] 0 containers: []
	W0807 11:06:23.768667    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:23.768717    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:23.779314    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:06:23.779333    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:23.779340    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:23.784215    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:06:23.784222    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:06:23.795427    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:06:23.795438    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:23.808226    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:23.808239    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:23.851651    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:06:23.851662    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:06:23.866314    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:06:23.866323    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:06:23.881853    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:23.881863    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:23.920319    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:06:23.920328    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:06:23.934498    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:06:23.934507    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:06:23.946598    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:06:23.946608    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:06:23.957666    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:06:23.957679    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:06:23.974487    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:06:23.974497    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:06:24.000290    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:06:24.000303    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:06:24.015184    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:06:24.015200    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:06:24.032621    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:06:24.032631    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:06:24.047403    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:06:24.047412    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:06:24.059107    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:24.059117    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:26.585448    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:31.582598    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:31.582741    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:31.594025    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:06:31.594094    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:31.604930    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:06:31.605006    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:31.615140    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:06:31.615212    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:31.625833    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:06:31.625900    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:31.636312    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:06:31.636378    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:31.647218    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:06:31.647288    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:31.657316    9637 logs.go:276] 0 containers: []
	W0807 11:06:31.657326    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:31.657381    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:31.667945    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:06:31.667962    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:06:31.667967    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:06:31.688481    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:31.688490    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:31.712000    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:06:31.712007    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:06:31.736684    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:06:31.736694    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:06:31.755781    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:06:31.755791    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:06:31.767222    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:31.767233    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:31.806562    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:06:31.806573    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:06:31.821804    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:06:31.821815    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:06:31.841371    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:06:31.841380    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:06:31.854227    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:06:31.854238    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:31.868703    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:31.868716    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:31.908225    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:06:31.908245    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:06:31.926815    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:06:31.926836    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:06:31.944472    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:06:31.944486    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:06:31.959383    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:06:31.959395    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:06:31.977127    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:31.977137    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:31.982073    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:06:31.982080    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:06:34.501360    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:39.498557    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:39.498728    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:39.511692    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:06:39.511760    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:39.522142    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:06:39.522211    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:39.533013    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:06:39.533086    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:39.543336    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:06:39.543408    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:39.553580    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:06:39.553637    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:39.564980    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:06:39.565036    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:39.575350    9637 logs.go:276] 0 containers: []
	W0807 11:06:39.575362    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:39.575413    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:39.586142    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:06:39.586161    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:39.586167    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:39.609321    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:39.609330    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:39.613854    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:39.613861    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:39.651190    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:06:39.651201    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:06:39.665125    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:06:39.665139    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:06:39.677086    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:06:39.677097    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:06:39.693792    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:39.693805    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:39.731245    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:06:39.731253    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:06:39.744981    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:06:39.744994    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:06:39.759547    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:06:39.759556    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:06:39.775184    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:06:39.775196    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:06:39.789558    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:06:39.789571    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:06:39.804143    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:06:39.804153    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:06:39.816007    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:06:39.816021    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:39.827267    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:06:39.827278    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:06:39.857056    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:06:39.857071    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:06:39.872014    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:06:39.872027    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:06:42.383840    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:47.383178    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:47.383262    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:47.394483    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:06:47.394558    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:47.405258    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:06:47.405333    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:47.415592    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:06:47.415657    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:47.426547    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:06:47.426617    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:47.437262    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:06:47.437330    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:47.448286    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:06:47.448352    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:47.458202    9637 logs.go:276] 0 containers: []
	W0807 11:06:47.458215    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:47.458273    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:47.467993    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:06:47.468010    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:47.468015    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:47.472761    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:47.472768    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:47.506000    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:06:47.506012    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:06:47.520171    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:06:47.520183    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:06:47.540545    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:06:47.540555    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:06:47.551658    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:06:47.551670    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:06:47.563038    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:06:47.563049    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:06:47.586282    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:06:47.586292    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:06:47.616248    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:06:47.616260    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:06:47.633219    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:47.633232    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:47.656714    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:06:47.656722    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:47.669802    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:47.669815    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:47.708240    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:06:47.708252    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:06:47.722105    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:06:47.722118    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:06:47.735029    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:06:47.735039    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:06:47.752696    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:06:47.752706    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:06:47.767791    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:06:47.767804    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:06:50.282925    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:06:55.283331    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:06:55.283516    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:06:55.295564    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:06:55.295635    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:06:55.306356    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:06:55.306428    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:06:55.323165    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:06:55.323231    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:06:55.333680    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:06:55.333747    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:06:55.347637    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:06:55.347707    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:06:55.359445    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:06:55.359509    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:06:55.369234    9637 logs.go:276] 0 containers: []
	W0807 11:06:55.369245    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:06:55.369303    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:06:55.380007    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:06:55.380028    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:06:55.380033    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:06:55.394447    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:06:55.394457    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:06:55.406479    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:06:55.406489    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:06:55.431481    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:06:55.431491    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:06:55.453350    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:06:55.453359    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:06:55.468487    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:06:55.468498    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:06:55.480119    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:06:55.480129    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:06:55.494045    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:06:55.494055    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:06:55.507529    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:06:55.507539    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:06:55.518763    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:06:55.518775    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:06:55.556597    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:06:55.556608    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:06:55.571425    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:06:55.571440    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:06:55.588999    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:06:55.589009    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:06:55.600952    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:06:55.600964    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:06:55.637956    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:06:55.637964    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:06:55.642057    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:06:55.642065    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:06:55.656481    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:06:55.656491    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:06:58.180137    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:03.181250    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:03.181417    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:07:03.193205    9637 logs.go:276] 2 containers: [a06f2d638f11 a895a6c8fd77]
	I0807 11:07:03.193272    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:07:03.231145    9637 logs.go:276] 2 containers: [d08942694678 18a4d38a0c8c]
	I0807 11:07:03.231213    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:07:03.241654    9637 logs.go:276] 1 containers: [4c6cc0aa8ad2]
	I0807 11:07:03.241716    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:07:03.252534    9637 logs.go:276] 2 containers: [1eafeb79fa7e 2d9c10a9a9e1]
	I0807 11:07:03.252599    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:07:03.263181    9637 logs.go:276] 1 containers: [2e71d24f909a]
	I0807 11:07:03.263245    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:07:03.274428    9637 logs.go:276] 2 containers: [48983944b9f0 56e44fe63415]
	I0807 11:07:03.274489    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:07:03.284675    9637 logs.go:276] 0 containers: []
	W0807 11:07:03.284686    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:07:03.284736    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:07:03.295027    9637 logs.go:276] 2 containers: [d7b2601a2a06 5b6b286aa22e]
	I0807 11:07:03.295049    9637 logs.go:123] Gathering logs for kube-controller-manager [56e44fe63415] ...
	I0807 11:07:03.295055    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e44fe63415"
	I0807 11:07:03.315928    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:07:03.315938    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:07:03.351419    9637 logs.go:123] Gathering logs for etcd [d08942694678] ...
	I0807 11:07:03.351433    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08942694678"
	I0807 11:07:03.365628    9637 logs.go:123] Gathering logs for coredns [4c6cc0aa8ad2] ...
	I0807 11:07:03.365638    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c6cc0aa8ad2"
	I0807 11:07:03.379588    9637 logs.go:123] Gathering logs for kube-scheduler [2d9c10a9a9e1] ...
	I0807 11:07:03.379600    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9c10a9a9e1"
	I0807 11:07:03.394925    9637 logs.go:123] Gathering logs for kube-controller-manager [48983944b9f0] ...
	I0807 11:07:03.394938    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48983944b9f0"
	I0807 11:07:03.412900    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:07:03.412910    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:07:03.425739    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:07:03.425749    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:07:03.464688    9637 logs.go:123] Gathering logs for kube-scheduler [1eafeb79fa7e] ...
	I0807 11:07:03.464700    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eafeb79fa7e"
	I0807 11:07:03.478744    9637 logs.go:123] Gathering logs for kube-proxy [2e71d24f909a] ...
	I0807 11:07:03.478753    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e71d24f909a"
	I0807 11:07:03.490467    9637 logs.go:123] Gathering logs for storage-provisioner [5b6b286aa22e] ...
	I0807 11:07:03.490476    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6b286aa22e"
	I0807 11:07:03.501858    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:07:03.501868    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:07:03.523650    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:07:03.523658    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:07:03.527813    9637 logs.go:123] Gathering logs for kube-apiserver [a895a6c8fd77] ...
	I0807 11:07:03.527819    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a895a6c8fd77"
	I0807 11:07:03.553017    9637 logs.go:123] Gathering logs for etcd [18a4d38a0c8c] ...
	I0807 11:07:03.553030    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18a4d38a0c8c"
	I0807 11:07:03.567728    9637 logs.go:123] Gathering logs for storage-provisioner [d7b2601a2a06] ...
	I0807 11:07:03.567739    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b2601a2a06"
	I0807 11:07:03.579681    9637 logs.go:123] Gathering logs for kube-apiserver [a06f2d638f11] ...
	I0807 11:07:03.579692    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06f2d638f11"
	I0807 11:07:06.093452    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:11.094987    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:11.095036    9637 kubeadm.go:597] duration metric: took 4m3.852897292s to restartPrimaryControlPlane
	W0807 11:07:11.095115    9637 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0807 11:07:11.095145    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0807 11:07:12.128041    9637 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.033005875s)
	I0807 11:07:12.128124    9637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 11:07:12.132930    9637 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0807 11:07:12.135560    9637 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0807 11:07:12.138157    9637 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0807 11:07:12.138162    9637 kubeadm.go:157] found existing configuration files:
	
	I0807 11:07:12.138179    9637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/admin.conf
	I0807 11:07:12.140756    9637 kubeadm.go:163] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0807 11:07:12.140782    9637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0807 11:07:12.143278    9637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/kubelet.conf
	I0807 11:07:12.145763    9637 kubeadm.go:163] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0807 11:07:12.145785    9637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0807 11:07:12.148859    9637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/controller-manager.conf
	I0807 11:07:12.151264    9637 kubeadm.go:163] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0807 11:07:12.151285    9637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0807 11:07:12.153904    9637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/scheduler.conf
	I0807 11:07:12.156979    9637 kubeadm.go:163] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0807 11:07:12.157001    9637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
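Before re-running kubeadm init, minikube checks whether each leftover kubeconfig under /etc/kubernetes still points at this cluster's endpoint and deletes it if not; here all four greps exit with status 2 because kubeadm reset had already removed the files. A sketch of that cleanup loop in Go (endpoint copied from the log; the real implementation issues these commands through ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:51476"
        for _, conf := range []string{"admin.conf", "kubelet.conf",
            "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + conf
            // grep exits non-zero when the file is missing or does not mention
            // the endpoint, which covers the "Process exited with status 2"
            // cases shown above.
            if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
                exec.Command("sudo", "rm", "-f", path).Run()
            }
        }
    }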
	I0807 11:07:12.159544    9637 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0807 11:07:12.177293    9637 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0807 11:07:12.177420    9637 kubeadm.go:310] [preflight] Running pre-flight checks
	I0807 11:07:12.225439    9637 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0807 11:07:12.225495    9637 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0807 11:07:12.225581    9637 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0807 11:07:12.277221    9637 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0807 11:07:12.280426    9637 out.go:204]   - Generating certificates and keys ...
	I0807 11:07:12.280456    9637 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0807 11:07:12.280493    9637 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0807 11:07:12.280528    9637 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0807 11:07:12.280553    9637 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0807 11:07:12.280612    9637 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0807 11:07:12.280647    9637 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0807 11:07:12.280712    9637 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0807 11:07:12.280748    9637 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0807 11:07:12.280783    9637 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0807 11:07:12.280823    9637 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0807 11:07:12.280846    9637 kubeadm.go:310] [certs] Using the existing "sa" key
	I0807 11:07:12.280876    9637 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0807 11:07:12.359064    9637 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0807 11:07:12.437833    9637 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0807 11:07:12.510691    9637 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0807 11:07:12.649138    9637 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0807 11:07:12.683389    9637 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0807 11:07:12.683888    9637 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0807 11:07:12.683940    9637 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0807 11:07:12.770381    9637 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0807 11:07:12.773711    9637 out.go:204]   - Booting up control plane ...
	I0807 11:07:12.773762    9637 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0807 11:07:12.773806    9637 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0807 11:07:12.773839    9637 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0807 11:07:12.773939    9637 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0807 11:07:12.774023    9637 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0807 11:07:17.272225    9637 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501377 seconds
	I0807 11:07:17.272285    9637 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0807 11:07:17.275841    9637 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0807 11:07:17.791679    9637 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0807 11:07:17.792113    9637 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-423000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0807 11:07:18.296542    9637 kubeadm.go:310] [bootstrap-token] Using token: uoe6y8.pluqtcpnydqamgb7
	I0807 11:07:18.302899    9637 out.go:204]   - Configuring RBAC rules ...
	I0807 11:07:18.302964    9637 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0807 11:07:18.303019    9637 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0807 11:07:18.309571    9637 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0807 11:07:18.310462    9637 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0807 11:07:18.311379    9637 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0807 11:07:18.322082    9637 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0807 11:07:18.334869    9637 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0807 11:07:18.530908    9637 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0807 11:07:18.700860    9637 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0807 11:07:18.701341    9637 kubeadm.go:310] 
	I0807 11:07:18.701370    9637 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0807 11:07:18.701374    9637 kubeadm.go:310] 
	I0807 11:07:18.701408    9637 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0807 11:07:18.701417    9637 kubeadm.go:310] 
	I0807 11:07:18.701432    9637 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0807 11:07:18.701464    9637 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0807 11:07:18.701546    9637 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0807 11:07:18.701561    9637 kubeadm.go:310] 
	I0807 11:07:18.701641    9637 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0807 11:07:18.701651    9637 kubeadm.go:310] 
	I0807 11:07:18.701705    9637 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0807 11:07:18.701711    9637 kubeadm.go:310] 
	I0807 11:07:18.701783    9637 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0807 11:07:18.701833    9637 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0807 11:07:18.701882    9637 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0807 11:07:18.701888    9637 kubeadm.go:310] 
	I0807 11:07:18.701953    9637 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0807 11:07:18.702057    9637 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0807 11:07:18.702068    9637 kubeadm.go:310] 
	I0807 11:07:18.702232    9637 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uoe6y8.pluqtcpnydqamgb7 \
	I0807 11:07:18.702297    9637 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:be167f124a8fb636a1678f53a2902f8fa29ed7e7a056c6f6e13484429b709f7d \
	I0807 11:07:18.702334    9637 kubeadm.go:310] 	--control-plane 
	I0807 11:07:18.702338    9637 kubeadm.go:310] 
	I0807 11:07:18.702383    9637 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0807 11:07:18.702387    9637 kubeadm.go:310] 
	I0807 11:07:18.702434    9637 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uoe6y8.pluqtcpnydqamgb7 \
	I0807 11:07:18.702492    9637 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:be167f124a8fb636a1678f53a2902f8fa29ed7e7a056c6f6e13484429b709f7d 
	I0807 11:07:18.702576    9637 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
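
The [WARNING Service-Kubelet] line above is kubeadm's standard hint that the kubelet systemd unit is not enabled and so would not survive a guest reboot. If one were remediating this by hand inside the node, the fix is exactly the command the warning names (a sketch only; minikube normally manages the unit itself):

    # Enable the kubelet unit so it starts on boot, per the kubeadm warning
    sudo systemctl enable kubelet.service
    # Confirm the unit is now enabled
    systemctl is-enabled kubelet.service
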
	I0807 11:07:18.702586    9637 cni.go:84] Creating CNI manager for ""
	I0807 11:07:18.702594    9637 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 11:07:18.705299    9637 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0807 11:07:18.713312    9637 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0807 11:07:18.716924    9637 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
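
The scp line above writes a 496-byte bridge CNI conflist to /etc/cni/net.d/1-k8s.conflist, but the payload itself is not captured in this log. As a hypothetical reconstruction, a bridge-plugin conflist of the kind minikube writes typically looks roughly like the sketch below; the field values (subnet, bridge name) are illustrative assumptions, not the actual file contents:

    # Hypothetical sketch -- the real 496-byte payload is not shown in this log.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
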
	I0807 11:07:18.722187    9637 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0807 11:07:18.722256    9637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 11:07:18.722273    9637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-423000 minikube.k8s.io/updated_at=2024_08_07T11_07_18_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e minikube.k8s.io/name=stopped-upgrade-423000 minikube.k8s.io/primary=true
	I0807 11:07:18.773118    9637 ops.go:34] apiserver oom_adj: -16
	I0807 11:07:18.773228    9637 kubeadm.go:1113] duration metric: took 51.035166ms to wait for elevateKubeSystemPrivileges
	I0807 11:07:18.773273    9637 kubeadm.go:394] duration metric: took 4m11.546048291s to StartCluster
	I0807 11:07:18.773299    9637 settings.go:142] acquiring lock: {Name:mk55ff1d0ed65f587ff79ec8ce8fd4d10e83296d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:07:18.773389    9637 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:07:18.773815    9637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/kubeconfig: {Name:mkee6d4905f7dc60ed0b5cf9ef87de0f637b0682 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:07:18.774006    9637 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:07:18.774048    9637 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0807 11:07:18.774115    9637 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-423000"
	I0807 11:07:18.774129    9637 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-423000"
	W0807 11:07:18.774132    9637 addons.go:243] addon storage-provisioner should already be in state true
	I0807 11:07:18.774144    9637 host.go:66] Checking if "stopped-upgrade-423000" exists ...
	I0807 11:07:18.774133    9637 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-423000"
	I0807 11:07:18.774199    9637 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0807 11:07:18.774208    9637 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-423000"
	I0807 11:07:18.775426    9637 kapi.go:59] client config for stopped-upgrade-423000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/stopped-upgrade-423000/client.key", CAFile:"/Users/jenkins/minikube-integration/19389-6671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101f73f90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0807 11:07:18.775539    9637 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-423000"
	W0807 11:07:18.775545    9637 addons.go:243] addon default-storageclass should already be in state true
	I0807 11:07:18.775551    9637 host.go:66] Checking if "stopped-upgrade-423000" exists ...
	I0807 11:07:18.778244    9637 out.go:177] * Verifying Kubernetes components...
	I0807 11:07:18.778585    9637 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0807 11:07:18.782382    9637 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0807 11:07:18.782413    9637 sshutil.go:53] new ssh client: &{IP:localhost Port:51441 SSHKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	I0807 11:07:18.786164    9637 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 11:07:18.789225    9637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 11:07:18.793242    9637 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 11:07:18.793248    9637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0807 11:07:18.793254    9637 sshutil.go:53] new ssh client: &{IP:localhost Port:51441 SSHKeyPath:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	I0807 11:07:18.891044    9637 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 11:07:18.896051    9637 api_server.go:52] waiting for apiserver process to appear ...
	I0807 11:07:18.896091    9637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 11:07:18.899645    9637 api_server.go:72] duration metric: took 125.63725ms to wait for apiserver process to appear ...
	I0807 11:07:18.899653    9637 api_server.go:88] waiting for apiserver healthz status ...
	I0807 11:07:18.899659    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:18.957076    9637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0807 11:07:18.970026    9637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 11:07:23.901496    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:23.901570    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:28.902082    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:28.902102    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:33.902669    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:33.902694    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:38.903168    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:38.903189    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:43.903993    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:43.904030    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:48.905218    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:48.905260    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0807 11:07:49.325195    9637 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0807 11:07:49.332366    9637 out.go:177] * Enabled addons: storage-provisioner
	I0807 11:07:49.340311    9637 addons.go:510] duration metric: took 30.56758975s for enable addons: enabled=[storage-provisioner]
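
From this point the run settles into the failure signature that dominates the rest of the log: every probe of https://10.0.2.15:8443/healthz times out after roughly five seconds with "context deadline exceeded". The probe minikube is making can be reproduced by hand from a shell with network reach to the guest (a sketch; e.g. after `minikube ssh` into the node):

    # Manually reproduce minikube's apiserver health probe (sketch)
    curl -k --max-time 5 https://10.0.2.15:8443/healthz
    # A healthy apiserver answers "ok"; here the connection never completes,
    # matching the repeated "context deadline exceeded" errors below.
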
	I0807 11:07:53.906569    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:53.906608    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:07:58.908070    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:07:58.908109    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:08:03.910272    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:08:03.910300    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:08:08.912397    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:08:08.912421    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:08:13.914193    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:08:13.914216    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:08:18.916310    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:08:18.916449    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:08:18.939126    9637 logs.go:276] 1 containers: [5d8a874706c7]
	I0807 11:08:18.939208    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:08:18.954670    9637 logs.go:276] 1 containers: [d8f8f677a139]
	I0807 11:08:18.954751    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:08:18.973086    9637 logs.go:276] 2 containers: [7ed58e24ac9f e689c0fe723b]
	I0807 11:08:18.973157    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:08:18.985475    9637 logs.go:276] 1 containers: [b2e77d663d57]
	I0807 11:08:18.985547    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:08:18.996092    9637 logs.go:276] 1 containers: [778c3f206d0d]
	I0807 11:08:18.996162    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:08:19.006342    9637 logs.go:276] 1 containers: [56aad39d4643]
	I0807 11:08:19.006411    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:08:19.016836    9637 logs.go:276] 0 containers: []
	W0807 11:08:19.016847    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:08:19.016903    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:08:19.027625    9637 logs.go:276] 1 containers: [0393cf2c5532]
	I0807 11:08:19.027643    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:08:19.027649    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:08:19.032024    9637 logs.go:123] Gathering logs for kube-apiserver [5d8a874706c7] ...
	I0807 11:08:19.032031    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8a874706c7"
	I0807 11:08:19.046669    9637 logs.go:123] Gathering logs for coredns [e689c0fe723b] ...
	I0807 11:08:19.046683    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e689c0fe723b"
	I0807 11:08:19.058545    9637 logs.go:123] Gathering logs for kube-proxy [778c3f206d0d] ...
	I0807 11:08:19.058560    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 778c3f206d0d"
	I0807 11:08:19.070540    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:08:19.070554    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:08:19.094248    9637 logs.go:123] Gathering logs for kube-controller-manager [56aad39d4643] ...
	I0807 11:08:19.094260    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aad39d4643"
	I0807 11:08:19.112820    9637 logs.go:123] Gathering logs for storage-provisioner [0393cf2c5532] ...
	I0807 11:08:19.112834    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0393cf2c5532"
	I0807 11:08:19.124579    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:08:19.124591    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:08:19.136532    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:08:19.136547    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:08:19.174821    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:08:19.174833    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:08:19.209095    9637 logs.go:123] Gathering logs for etcd [d8f8f677a139] ...
	I0807 11:08:19.209112    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8f8f677a139"
	I0807 11:08:19.223497    9637 logs.go:123] Gathering logs for coredns [7ed58e24ac9f] ...
	I0807 11:08:19.223507    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ed58e24ac9f"
	I0807 11:08:19.234822    9637 logs.go:123] Gathering logs for kube-scheduler [b2e77d663d57] ...
	I0807 11:08:19.234835    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e77d663d57"
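
Each time a healthz probe fails, minikube repeats the same diagnostic sweep shown above: enumerate each control-plane container by its k8s_* name filter, then tail its last 400 log lines. The identical sweep recurs after every failed probe for the remainder of this log. Done by hand it would look like this sketch, using the same docker commands the log records:

    # Sketch of the diagnostic sweep minikube repeats after each failed probe
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager storage-provisioner; do
      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
      for id in $ids; do
        echo "=== ${c} (${id}) ==="
        docker logs --tail 400 "$id"
      done
    done
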
	I0807 11:08:21.750688    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:08:26.753171    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:08:26.753672    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:08:26.788339    9637 logs.go:276] 1 containers: [5d8a874706c7]
	I0807 11:08:26.788465    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:08:26.810020    9637 logs.go:276] 1 containers: [d8f8f677a139]
	I0807 11:08:26.810122    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:08:26.824973    9637 logs.go:276] 2 containers: [7ed58e24ac9f e689c0fe723b]
	I0807 11:08:26.825042    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:08:26.836131    9637 logs.go:276] 1 containers: [b2e77d663d57]
	I0807 11:08:26.836191    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:08:26.850811    9637 logs.go:276] 1 containers: [778c3f206d0d]
	I0807 11:08:26.850889    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:08:26.865071    9637 logs.go:276] 1 containers: [56aad39d4643]
	I0807 11:08:26.865137    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:08:26.875457    9637 logs.go:276] 0 containers: []
	W0807 11:08:26.875469    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:08:26.875529    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:08:26.887883    9637 logs.go:276] 1 containers: [0393cf2c5532]
	I0807 11:08:26.887897    9637 logs.go:123] Gathering logs for etcd [d8f8f677a139] ...
	I0807 11:08:26.887902    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8f8f677a139"
	I0807 11:08:26.901805    9637 logs.go:123] Gathering logs for coredns [7ed58e24ac9f] ...
	I0807 11:08:26.901819    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ed58e24ac9f"
	I0807 11:08:26.913512    9637 logs.go:123] Gathering logs for coredns [e689c0fe723b] ...
	I0807 11:08:26.913523    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e689c0fe723b"
	I0807 11:08:26.925022    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:08:26.925036    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:08:26.937052    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:08:26.937060    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:08:26.973948    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:08:26.973958    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:08:27.009418    9637 logs.go:123] Gathering logs for kube-apiserver [5d8a874706c7] ...
	I0807 11:08:27.009429    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8a874706c7"
	I0807 11:08:27.024087    9637 logs.go:123] Gathering logs for kube-controller-manager [56aad39d4643] ...
	I0807 11:08:27.024099    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aad39d4643"
	I0807 11:08:27.041170    9637 logs.go:123] Gathering logs for storage-provisioner [0393cf2c5532] ...
	I0807 11:08:27.041181    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0393cf2c5532"
	I0807 11:08:27.053032    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:08:27.053045    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:08:27.078648    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:08:27.078655    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:08:27.082502    9637 logs.go:123] Gathering logs for kube-scheduler [b2e77d663d57] ...
	I0807 11:08:27.082511    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e77d663d57"
	I0807 11:08:27.103928    9637 logs.go:123] Gathering logs for kube-proxy [778c3f206d0d] ...
	I0807 11:08:27.103940    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 778c3f206d0d"
	I0807 11:08:29.617490    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:08:34.620188    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:08:34.620483    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:08:34.650258    9637 logs.go:276] 1 containers: [5d8a874706c7]
	I0807 11:08:34.650381    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:08:34.676185    9637 logs.go:276] 1 containers: [d8f8f677a139]
	I0807 11:08:34.676263    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:08:34.689309    9637 logs.go:276] 2 containers: [7ed58e24ac9f e689c0fe723b]
	I0807 11:08:34.689374    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:08:34.700289    9637 logs.go:276] 1 containers: [b2e77d663d57]
	I0807 11:08:34.700351    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:08:34.710410    9637 logs.go:276] 1 containers: [778c3f206d0d]
	I0807 11:08:34.710474    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:08:34.720655    9637 logs.go:276] 1 containers: [56aad39d4643]
	I0807 11:08:34.720714    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:08:34.731310    9637 logs.go:276] 0 containers: []
	W0807 11:08:34.731321    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:08:34.731378    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:08:34.741402    9637 logs.go:276] 1 containers: [0393cf2c5532]
	I0807 11:08:34.741417    9637 logs.go:123] Gathering logs for storage-provisioner [0393cf2c5532] ...
	I0807 11:08:34.741421    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0393cf2c5532"
	I0807 11:08:34.756912    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:08:34.756925    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:08:34.780501    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:08:34.780508    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:08:34.784936    9637 logs.go:123] Gathering logs for etcd [d8f8f677a139] ...
	I0807 11:08:34.784945    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8f8f677a139"
	I0807 11:08:34.799107    9637 logs.go:123] Gathering logs for coredns [7ed58e24ac9f] ...
	I0807 11:08:34.799120    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ed58e24ac9f"
	I0807 11:08:34.810344    9637 logs.go:123] Gathering logs for coredns [e689c0fe723b] ...
	I0807 11:08:34.810357    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e689c0fe723b"
	I0807 11:08:34.821887    9637 logs.go:123] Gathering logs for kube-scheduler [b2e77d663d57] ...
	I0807 11:08:34.821901    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e77d663d57"
	I0807 11:08:34.838838    9637 logs.go:123] Gathering logs for kube-proxy [778c3f206d0d] ...
	I0807 11:08:34.838850    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 778c3f206d0d"
	I0807 11:08:34.850446    9637 logs.go:123] Gathering logs for kube-controller-manager [56aad39d4643] ...
	I0807 11:08:34.850460    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aad39d4643"
	I0807 11:08:34.866979    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:08:34.866991    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:08:34.878354    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:08:34.878364    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:08:34.913913    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:08:34.913921    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:08:34.956741    9637 logs.go:123] Gathering logs for kube-apiserver [5d8a874706c7] ...
	I0807 11:08:34.956752    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8a874706c7"
	I0807 11:08:37.473460    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:08:42.476312    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:08:42.476579    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:08:42.510690    9637 logs.go:276] 1 containers: [5d8a874706c7]
	I0807 11:08:42.510810    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:08:42.531229    9637 logs.go:276] 1 containers: [d8f8f677a139]
	I0807 11:08:42.531320    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:08:42.545198    9637 logs.go:276] 2 containers: [7ed58e24ac9f e689c0fe723b]
	I0807 11:08:42.545270    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:08:42.557159    9637 logs.go:276] 1 containers: [b2e77d663d57]
	I0807 11:08:42.557224    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:08:42.572215    9637 logs.go:276] 1 containers: [778c3f206d0d]
	I0807 11:08:42.572279    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:08:42.583567    9637 logs.go:276] 1 containers: [56aad39d4643]
	I0807 11:08:42.583638    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:08:42.593917    9637 logs.go:276] 0 containers: []
	W0807 11:08:42.593929    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:08:42.593985    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:08:42.604663    9637 logs.go:276] 1 containers: [0393cf2c5532]
	I0807 11:08:42.604678    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:08:42.604684    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:08:42.628265    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:08:42.628273    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:08:42.639724    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:08:42.639733    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:08:42.643836    9637 logs.go:123] Gathering logs for kube-apiserver [5d8a874706c7] ...
	I0807 11:08:42.643842    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8a874706c7"
	I0807 11:08:42.658210    9637 logs.go:123] Gathering logs for kube-proxy [778c3f206d0d] ...
	I0807 11:08:42.658222    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 778c3f206d0d"
	I0807 11:08:42.669755    9637 logs.go:123] Gathering logs for kube-controller-manager [56aad39d4643] ...
	I0807 11:08:42.669767    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aad39d4643"
	I0807 11:08:42.687184    9637 logs.go:123] Gathering logs for storage-provisioner [0393cf2c5532] ...
	I0807 11:08:42.687197    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0393cf2c5532"
	I0807 11:08:42.699105    9637 logs.go:123] Gathering logs for kube-scheduler [b2e77d663d57] ...
	I0807 11:08:42.699116    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e77d663d57"
	I0807 11:08:42.714164    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:08:42.714176    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:08:42.752065    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:08:42.752072    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:08:42.787089    9637 logs.go:123] Gathering logs for etcd [d8f8f677a139] ...
	I0807 11:08:42.787101    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8f8f677a139"
	I0807 11:08:42.801019    9637 logs.go:123] Gathering logs for coredns [7ed58e24ac9f] ...
	I0807 11:08:42.801031    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ed58e24ac9f"
	I0807 11:08:42.812649    9637 logs.go:123] Gathering logs for coredns [e689c0fe723b] ...
	I0807 11:08:42.812661    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e689c0fe723b"
	I0807 11:08:45.329671    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:08:50.332143    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:08:50.332548    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:08:50.367054    9637 logs.go:276] 1 containers: [5d8a874706c7]
	I0807 11:08:50.367189    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:08:50.389249    9637 logs.go:276] 1 containers: [d8f8f677a139]
	I0807 11:08:50.389352    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:08:50.404067    9637 logs.go:276] 2 containers: [7ed58e24ac9f e689c0fe723b]
	I0807 11:08:50.404125    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:08:50.416712    9637 logs.go:276] 1 containers: [b2e77d663d57]
	I0807 11:08:50.416767    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:08:50.431333    9637 logs.go:276] 1 containers: [778c3f206d0d]
	I0807 11:08:50.431398    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:08:50.441840    9637 logs.go:276] 1 containers: [56aad39d4643]
	I0807 11:08:50.441909    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:08:50.453507    9637 logs.go:276] 0 containers: []
	W0807 11:08:50.453519    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:08:50.453572    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:08:50.463661    9637 logs.go:276] 1 containers: [0393cf2c5532]
	I0807 11:08:50.463678    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:08:50.463683    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:08:50.475620    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:08:50.475629    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:08:50.480491    9637 logs.go:123] Gathering logs for etcd [d8f8f677a139] ...
	I0807 11:08:50.480501    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8f8f677a139"
	I0807 11:08:50.494616    9637 logs.go:123] Gathering logs for coredns [7ed58e24ac9f] ...
	I0807 11:08:50.494629    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ed58e24ac9f"
	I0807 11:08:50.506230    9637 logs.go:123] Gathering logs for kube-proxy [778c3f206d0d] ...
	I0807 11:08:50.506243    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 778c3f206d0d"
	I0807 11:08:50.517785    9637 logs.go:123] Gathering logs for kube-controller-manager [56aad39d4643] ...
	I0807 11:08:50.517799    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aad39d4643"
	I0807 11:08:50.535427    9637 logs.go:123] Gathering logs for storage-provisioner [0393cf2c5532] ...
	I0807 11:08:50.535437    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0393cf2c5532"
	I0807 11:08:50.547028    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:08:50.547039    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:08:50.570198    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:08:50.570204    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:08:50.608006    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:08:50.608015    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:08:50.642716    9637 logs.go:123] Gathering logs for kube-apiserver [5d8a874706c7] ...
	I0807 11:08:50.642730    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8a874706c7"
	I0807 11:08:50.657096    9637 logs.go:123] Gathering logs for coredns [e689c0fe723b] ...
	I0807 11:08:50.657108    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e689c0fe723b"
	I0807 11:08:50.669252    9637 logs.go:123] Gathering logs for kube-scheduler [b2e77d663d57] ...
	I0807 11:08:50.669263    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e77d663d57"
	I0807 11:08:53.185899    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:08:58.188344    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:08:58.188644    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:08:58.234642    9637 logs.go:276] 1 containers: [5d8a874706c7]
	I0807 11:08:58.234765    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:08:58.253321    9637 logs.go:276] 1 containers: [d8f8f677a139]
	I0807 11:08:58.253402    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:08:58.276598    9637 logs.go:276] 2 containers: [7ed58e24ac9f e689c0fe723b]
	I0807 11:08:58.276672    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:08:58.288408    9637 logs.go:276] 1 containers: [b2e77d663d57]
	I0807 11:08:58.288474    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:08:58.299415    9637 logs.go:276] 1 containers: [778c3f206d0d]
	I0807 11:08:58.299489    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:08:58.313893    9637 logs.go:276] 1 containers: [56aad39d4643]
	I0807 11:08:58.313955    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:08:58.324318    9637 logs.go:276] 0 containers: []
	W0807 11:08:58.324327    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:08:58.324374    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:08:58.334761    9637 logs.go:276] 1 containers: [0393cf2c5532]
	I0807 11:08:58.334775    9637 logs.go:123] Gathering logs for coredns [7ed58e24ac9f] ...
	I0807 11:08:58.334780    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ed58e24ac9f"
	I0807 11:08:58.347044    9637 logs.go:123] Gathering logs for kube-controller-manager [56aad39d4643] ...
	I0807 11:08:58.347056    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aad39d4643"
	I0807 11:08:58.365520    9637 logs.go:123] Gathering logs for storage-provisioner [0393cf2c5532] ...
	I0807 11:08:58.365530    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0393cf2c5532"
	I0807 11:08:58.385613    9637 logs.go:123] Gathering logs for etcd [d8f8f677a139] ...
	I0807 11:08:58.385623    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8f8f677a139"
	I0807 11:08:58.404042    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:08:58.404055    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:08:58.408720    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:08:58.408726    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:08:58.443494    9637 logs.go:123] Gathering logs for kube-apiserver [5d8a874706c7] ...
	I0807 11:08:58.443507    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8a874706c7"
	I0807 11:08:58.461930    9637 logs.go:123] Gathering logs for coredns [e689c0fe723b] ...
	I0807 11:08:58.461941    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e689c0fe723b"
	I0807 11:08:58.473483    9637 logs.go:123] Gathering logs for kube-scheduler [b2e77d663d57] ...
	I0807 11:08:58.473496    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e77d663d57"
	I0807 11:08:58.488951    9637 logs.go:123] Gathering logs for kube-proxy [778c3f206d0d] ...
	I0807 11:08:58.488963    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 778c3f206d0d"
	I0807 11:08:58.501610    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:08:58.501624    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:08:58.526451    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:08:58.526459    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:08:58.564608    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:08:58.564620    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:09:01.078539    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:09:06.081303    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:09:06.081717    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:09:06.119995    9637 logs.go:276] 1 containers: [5d8a874706c7]
	I0807 11:09:06.120126    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:09:06.140221    9637 logs.go:276] 1 containers: [d8f8f677a139]
	I0807 11:09:06.140313    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:09:06.154871    9637 logs.go:276] 2 containers: [7ed58e24ac9f e689c0fe723b]
	I0807 11:09:06.154947    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:09:06.167582    9637 logs.go:276] 1 containers: [b2e77d663d57]
	I0807 11:09:06.167644    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:09:06.178819    9637 logs.go:276] 1 containers: [778c3f206d0d]
	I0807 11:09:06.178900    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:09:06.189892    9637 logs.go:276] 1 containers: [56aad39d4643]
	I0807 11:09:06.189959    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:09:06.200394    9637 logs.go:276] 0 containers: []
	W0807 11:09:06.200404    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:09:06.200455    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:09:06.210933    9637 logs.go:276] 1 containers: [0393cf2c5532]
	I0807 11:09:06.210948    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:09:06.210953    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:09:06.247130    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:09:06.247140    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:09:06.282874    9637 logs.go:123] Gathering logs for etcd [d8f8f677a139] ...
	I0807 11:09:06.282886    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8f8f677a139"
	I0807 11:09:06.297364    9637 logs.go:123] Gathering logs for kube-scheduler [b2e77d663d57] ...
	I0807 11:09:06.297376    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e77d663d57"
	I0807 11:09:06.315608    9637 logs.go:123] Gathering logs for kube-controller-manager [56aad39d4643] ...
	I0807 11:09:06.315622    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aad39d4643"
	I0807 11:09:06.333330    9637 logs.go:123] Gathering logs for storage-provisioner [0393cf2c5532] ...
	I0807 11:09:06.333343    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0393cf2c5532"
	I0807 11:09:06.346993    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:09:06.347010    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:09:06.358770    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:09:06.358785    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:09:06.363008    9637 logs.go:123] Gathering logs for kube-apiserver [5d8a874706c7] ...
	I0807 11:09:06.363015    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8a874706c7"
	I0807 11:09:06.377657    9637 logs.go:123] Gathering logs for coredns [7ed58e24ac9f] ...
	I0807 11:09:06.377667    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ed58e24ac9f"
	I0807 11:09:06.393703    9637 logs.go:123] Gathering logs for coredns [e689c0fe723b] ...
	I0807 11:09:06.393716    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e689c0fe723b"
	I0807 11:09:06.405324    9637 logs.go:123] Gathering logs for kube-proxy [778c3f206d0d] ...
	I0807 11:09:06.405335    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 778c3f206d0d"
	I0807 11:09:06.416987    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:09:06.416996    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:09:08.941215    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:09:13.943558    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:09:13.943787    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:09:13.972745    9637 logs.go:276] 1 containers: [5d8a874706c7]
	I0807 11:09:13.972867    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:09:13.991070    9637 logs.go:276] 1 containers: [d8f8f677a139]
	I0807 11:09:13.991152    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:09:14.004958    9637 logs.go:276] 2 containers: [7ed58e24ac9f e689c0fe723b]
	I0807 11:09:14.005024    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:09:14.016195    9637 logs.go:276] 1 containers: [b2e77d663d57]
	I0807 11:09:14.016263    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:09:14.026997    9637 logs.go:276] 1 containers: [778c3f206d0d]
	I0807 11:09:14.027056    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:09:14.037242    9637 logs.go:276] 1 containers: [56aad39d4643]
	I0807 11:09:14.037304    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:09:14.047304    9637 logs.go:276] 0 containers: []
	W0807 11:09:14.047314    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:09:14.047369    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:09:14.060257    9637 logs.go:276] 1 containers: [0393cf2c5532]
	I0807 11:09:14.060269    9637 logs.go:123] Gathering logs for coredns [e689c0fe723b] ...
	I0807 11:09:14.060274    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e689c0fe723b"
	I0807 11:09:14.071417    9637 logs.go:123] Gathering logs for kube-scheduler [b2e77d663d57] ...
	I0807 11:09:14.071426    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e77d663d57"
	I0807 11:09:14.086289    9637 logs.go:123] Gathering logs for kube-proxy [778c3f206d0d] ...
	I0807 11:09:14.086298    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 778c3f206d0d"
	I0807 11:09:14.097696    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:09:14.097709    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:09:14.109020    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:09:14.109033    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:09:14.145612    9637 logs.go:123] Gathering logs for kube-apiserver [5d8a874706c7] ...
	I0807 11:09:14.145620    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8a874706c7"
	I0807 11:09:14.164405    9637 logs.go:123] Gathering logs for etcd [d8f8f677a139] ...
	I0807 11:09:14.164414    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8f8f677a139"
	I0807 11:09:14.178396    9637 logs.go:123] Gathering logs for coredns [7ed58e24ac9f] ...
	I0807 11:09:14.178407    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ed58e24ac9f"
	I0807 11:09:14.195596    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:09:14.195605    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:09:14.220269    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:09:14.220281    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:09:14.230479    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:09:14.230492    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:09:14.267249    9637 logs.go:123] Gathering logs for kube-controller-manager [56aad39d4643] ...
	I0807 11:09:14.267261    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aad39d4643"
	I0807 11:09:14.285471    9637 logs.go:123] Gathering logs for storage-provisioner [0393cf2c5532] ...
	I0807 11:09:14.285479    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0393cf2c5532"
	I0807 11:09:16.801302    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:09:21.803603    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:09:21.803846    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:09:21.832727    9637 logs.go:276] 1 containers: [5d8a874706c7]
	I0807 11:09:21.832840    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:09:21.849662    9637 logs.go:276] 1 containers: [d8f8f677a139]
	I0807 11:09:21.849743    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:09:21.863249    9637 logs.go:276] 4 containers: [ab1a697e4c71 8833d45a23c3 7ed58e24ac9f e689c0fe723b]
	I0807 11:09:21.863323    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:09:21.878042    9637 logs.go:276] 1 containers: [b2e77d663d57]
	I0807 11:09:21.878110    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:09:21.888256    9637 logs.go:276] 1 containers: [778c3f206d0d]
	I0807 11:09:21.888317    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:09:21.898584    9637 logs.go:276] 1 containers: [56aad39d4643]
	I0807 11:09:21.898647    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:09:21.911611    9637 logs.go:276] 0 containers: []
	W0807 11:09:21.911622    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:09:21.911676    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:09:21.921794    9637 logs.go:276] 1 containers: [0393cf2c5532]
	I0807 11:09:21.921811    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:09:21.921816    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:09:21.957126    9637 logs.go:123] Gathering logs for kube-apiserver [5d8a874706c7] ...
	I0807 11:09:21.957140    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8a874706c7"
	I0807 11:09:21.971714    9637 logs.go:123] Gathering logs for kube-scheduler [b2e77d663d57] ...
	I0807 11:09:21.971725    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e77d663d57"
	I0807 11:09:21.987081    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:09:21.987090    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:09:22.000590    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:09:22.000604    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:09:22.005471    9637 logs.go:123] Gathering logs for coredns [ab1a697e4c71] ...
	I0807 11:09:22.005479    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab1a697e4c71"
	I0807 11:09:22.016524    9637 logs.go:123] Gathering logs for coredns [7ed58e24ac9f] ...
	I0807 11:09:22.016534    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ed58e24ac9f"
	I0807 11:09:22.032456    9637 logs.go:123] Gathering logs for kube-controller-manager [56aad39d4643] ...
	I0807 11:09:22.032470    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aad39d4643"
	I0807 11:09:22.049945    9637 logs.go:123] Gathering logs for storage-provisioner [0393cf2c5532] ...
	I0807 11:09:22.049956    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0393cf2c5532"
	I0807 11:09:22.061236    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:09:22.061246    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:09:22.097665    9637 logs.go:123] Gathering logs for etcd [d8f8f677a139] ...
	I0807 11:09:22.097674    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8f8f677a139"
	I0807 11:09:22.114087    9637 logs.go:123] Gathering logs for coredns [8833d45a23c3] ...
	I0807 11:09:22.114099    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8833d45a23c3"
	I0807 11:09:22.126373    9637 logs.go:123] Gathering logs for coredns [e689c0fe723b] ...
	I0807 11:09:22.126383    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e689c0fe723b"
	I0807 11:09:22.137519    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:09:22.137532    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:09:22.161164    9637 logs.go:123] Gathering logs for kube-proxy [778c3f206d0d] ...
	I0807 11:09:22.161173    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 778c3f206d0d"
	I0807 11:09:24.674731    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:09:29.677314    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:09:29.677785    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:09:29.717126    9637 logs.go:276] 1 containers: [5d8a874706c7]
	I0807 11:09:29.717248    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:09:29.738918    9637 logs.go:276] 1 containers: [d8f8f677a139]
	I0807 11:09:29.739011    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:09:29.755143    9637 logs.go:276] 4 containers: [ab1a697e4c71 8833d45a23c3 7ed58e24ac9f e689c0fe723b]
	I0807 11:09:29.755220    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:09:29.767623    9637 logs.go:276] 1 containers: [b2e77d663d57]
	I0807 11:09:29.767683    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:09:29.778421    9637 logs.go:276] 1 containers: [778c3f206d0d]
	I0807 11:09:29.778482    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:09:29.788856    9637 logs.go:276] 1 containers: [56aad39d4643]
	I0807 11:09:29.788932    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:09:29.799130    9637 logs.go:276] 0 containers: []
	W0807 11:09:29.799140    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:09:29.799195    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:09:29.814814    9637 logs.go:276] 1 containers: [0393cf2c5532]
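
	Before each gathering pass, the log enumerates one container-ID list per control-plane component via docker ps name filters, as in the eight Run lines above (kindnet matching zero containers produces the warning). A small self-contained sketch of that enumeration, assuming a locally reachable docker CLI — in the log the same command runs on the node over SSH via ssh_runner:

	    // Sketch of the per-component container enumeration seen above.
	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    // containerIDs runs:
	    //   docker ps -a --filter name=k8s_<component> --format {{.ID}}
	    // and returns one ID per matching container.
	    func containerIDs(component string) ([]string, error) {
	    	out, err := exec.Command("docker", "ps", "-a",
	    		"--filter", "name=k8s_"+component,
	    		"--format", "{{.ID}}").Output()
	    	if err != nil {
	    		return nil, err
	    	}
	    	return strings.Fields(string(out)), nil
	    }

	    func main() {
	    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
	    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
	    		"kindnet", "storage-provisioner"} {
	    		ids, err := containerIDs(c)
	    		if err != nil {
	    			fmt.Println(c, "error:", err)
	    			continue
	    		}
	    		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	    	}
	    }
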
	I0807 11:09:29.814832    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:09:29.814839    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:09:29.849087    9637 logs.go:123] Gathering logs for coredns [8833d45a23c3] ...
	I0807 11:09:29.849099    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8833d45a23c3"
	I0807 11:09:29.861138    9637 logs.go:123] Gathering logs for kube-scheduler [b2e77d663d57] ...
	I0807 11:09:29.861149    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e77d663d57"
	I0807 11:09:29.876222    9637 logs.go:123] Gathering logs for coredns [e689c0fe723b] ...
	I0807 11:09:29.876232    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e689c0fe723b"
	I0807 11:09:29.888189    9637 logs.go:123] Gathering logs for kube-controller-manager [56aad39d4643] ...
	I0807 11:09:29.888199    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aad39d4643"
	I0807 11:09:29.906175    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:09:29.906184    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:09:29.910235    9637 logs.go:123] Gathering logs for etcd [d8f8f677a139] ...
	I0807 11:09:29.910243    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8f8f677a139"
	I0807 11:09:29.924154    9637 logs.go:123] Gathering logs for coredns [7ed58e24ac9f] ...
	I0807 11:09:29.924166    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ed58e24ac9f"
	I0807 11:09:29.936115    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:09:29.936126    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:09:29.962048    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:09:29.962059    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:09:29.999535    9637 logs.go:123] Gathering logs for kube-apiserver [5d8a874706c7] ...
	I0807 11:09:29.999544    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8a874706c7"
	I0807 11:09:30.013716    9637 logs.go:123] Gathering logs for coredns [ab1a697e4c71] ...
	I0807 11:09:30.013726    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab1a697e4c71"
	I0807 11:09:30.024476    9637 logs.go:123] Gathering logs for kube-proxy [778c3f206d0d] ...
	I0807 11:09:30.024490    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 778c3f206d0d"
	I0807 11:09:30.036388    9637 logs.go:123] Gathering logs for storage-provisioner [0393cf2c5532] ...
	I0807 11:09:30.036401    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0393cf2c5532"
	I0807 11:09:30.047759    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:09:30.047769    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
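
	Each gathering pass then fans out over the sources enumerated earlier: docker logs --tail 400 per container, journalctl -n 400 per systemd unit, kubectl describe nodes, and a crictl-or-docker fallback for container status, exactly as in the Run lines above. A hedged sketch of that fan-out follows; the container IDs are the coredns IDs from this log, and docker plus journalctl on PATH are assumptions of the sketch:

	    // Sketch of the log-gathering fan-out shown above; each source is
	    // capped at its last 400 lines, mirroring the logged commands.
	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    // run executes a shell command and returns its combined output,
	    // folding any error into the returned text.
	    func run(cmd string) string {
	    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	    	if err != nil {
	    		return fmt.Sprintf("error: %v\n%s", err, out)
	    	}
	    	return string(out)
	    }

	    func main() {
	    	// Per-container logs (IDs taken from the coredns list in this log).
	    	for _, id := range []string{"ab1a697e4c71", "8833d45a23c3"} {
	    		fmt.Println(run("docker logs --tail 400 " + id))
	    	}
	    	// systemd units gathered the same way above.
	    	fmt.Println(run("sudo journalctl -u kubelet -n 400"))
	    	fmt.Println(run("sudo journalctl -u docker -u cri-docker -n 400"))
	    	// Container status, preferring crictl and falling back to docker.
	    	fmt.Println(run("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"))
	    }
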
	I0807 11:09:32.561502    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:09:37.562708    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:09:37.563085    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:09:37.595174    9637 logs.go:276] 1 containers: [5d8a874706c7]
	I0807 11:09:37.595299    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:09:37.614161    9637 logs.go:276] 1 containers: [d8f8f677a139]
	I0807 11:09:37.614248    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:09:37.628784    9637 logs.go:276] 4 containers: [ab1a697e4c71 8833d45a23c3 7ed58e24ac9f e689c0fe723b]
	I0807 11:09:37.628855    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:09:37.644656    9637 logs.go:276] 1 containers: [b2e77d663d57]
	I0807 11:09:37.644717    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:09:37.655636    9637 logs.go:276] 1 containers: [778c3f206d0d]
	I0807 11:09:37.655695    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:09:37.665905    9637 logs.go:276] 1 containers: [56aad39d4643]
	I0807 11:09:37.665961    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:09:37.676234    9637 logs.go:276] 0 containers: []
	W0807 11:09:37.676243    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:09:37.676291    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:09:37.690577    9637 logs.go:276] 1 containers: [0393cf2c5532]
	I0807 11:09:37.690603    9637 logs.go:123] Gathering logs for kube-controller-manager [56aad39d4643] ...
	I0807 11:09:37.690608    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aad39d4643"
	I0807 11:09:37.708372    9637 logs.go:123] Gathering logs for storage-provisioner [0393cf2c5532] ...
	I0807 11:09:37.708381    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0393cf2c5532"
	I0807 11:09:37.719897    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:09:37.719910    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:09:37.731788    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:09:37.731803    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:09:37.766611    9637 logs.go:123] Gathering logs for coredns [7ed58e24ac9f] ...
	I0807 11:09:37.766624    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ed58e24ac9f"
	I0807 11:09:37.778949    9637 logs.go:123] Gathering logs for etcd [d8f8f677a139] ...
	I0807 11:09:37.778963    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8f8f677a139"
	I0807 11:09:37.792741    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:09:37.792752    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:09:37.831056    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:09:37.831065    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:09:37.835310    9637 logs.go:123] Gathering logs for kube-apiserver [5d8a874706c7] ...
	I0807 11:09:37.835318    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8a874706c7"
	I0807 11:09:37.849620    9637 logs.go:123] Gathering logs for coredns [e689c0fe723b] ...
	I0807 11:09:37.849632    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e689c0fe723b"
	I0807 11:09:37.861208    9637 logs.go:123] Gathering logs for kube-scheduler [b2e77d663d57] ...
	I0807 11:09:37.861221    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e77d663d57"
	I0807 11:09:37.875908    9637 logs.go:123] Gathering logs for kube-proxy [778c3f206d0d] ...
	I0807 11:09:37.875919    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 778c3f206d0d"
	I0807 11:09:37.890840    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:09:37.890853    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:09:37.915630    9637 logs.go:123] Gathering logs for coredns [ab1a697e4c71] ...
	I0807 11:09:37.915640    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab1a697e4c71"
	I0807 11:09:37.926918    9637 logs.go:123] Gathering logs for coredns [8833d45a23c3] ...
	I0807 11:09:37.926930    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8833d45a23c3"
	I0807 11:09:40.440562    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:09:45.442694    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:09:45.442885    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:09:45.465190    9637 logs.go:276] 1 containers: [5d8a874706c7]
	I0807 11:09:45.465304    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:09:45.480541    9637 logs.go:276] 1 containers: [d8f8f677a139]
	I0807 11:09:45.480613    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:09:45.495578    9637 logs.go:276] 4 containers: [ab1a697e4c71 8833d45a23c3 7ed58e24ac9f e689c0fe723b]
	I0807 11:09:45.495644    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:09:45.506016    9637 logs.go:276] 1 containers: [b2e77d663d57]
	I0807 11:09:45.506081    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:09:45.516546    9637 logs.go:276] 1 containers: [778c3f206d0d]
	I0807 11:09:45.516604    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:09:45.526740    9637 logs.go:276] 1 containers: [56aad39d4643]
	I0807 11:09:45.526808    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:09:45.540952    9637 logs.go:276] 0 containers: []
	W0807 11:09:45.540964    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:09:45.541017    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:09:45.551687    9637 logs.go:276] 1 containers: [0393cf2c5532]
	I0807 11:09:45.551702    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:09:45.551708    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:09:45.563681    9637 logs.go:123] Gathering logs for kube-scheduler [b2e77d663d57] ...
	I0807 11:09:45.563695    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e77d663d57"
	I0807 11:09:45.578017    9637 logs.go:123] Gathering logs for storage-provisioner [0393cf2c5532] ...
	I0807 11:09:45.578030    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0393cf2c5532"
	I0807 11:09:45.589207    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:09:45.589220    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:09:45.623760    9637 logs.go:123] Gathering logs for etcd [d8f8f677a139] ...
	I0807 11:09:45.623774    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8f8f677a139"
	I0807 11:09:45.637454    9637 logs.go:123] Gathering logs for coredns [7ed58e24ac9f] ...
	I0807 11:09:45.637466    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ed58e24ac9f"
	I0807 11:09:45.649158    9637 logs.go:123] Gathering logs for kube-proxy [778c3f206d0d] ...
	I0807 11:09:45.649169    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 778c3f206d0d"
	I0807 11:09:45.660588    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:09:45.660600    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:09:45.698745    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:09:45.698753    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:09:45.702612    9637 logs.go:123] Gathering logs for coredns [8833d45a23c3] ...
	I0807 11:09:45.702621    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8833d45a23c3"
	I0807 11:09:45.726271    9637 logs.go:123] Gathering logs for kube-controller-manager [56aad39d4643] ...
	I0807 11:09:45.726282    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aad39d4643"
	I0807 11:09:45.746797    9637 logs.go:123] Gathering logs for coredns [e689c0fe723b] ...
	I0807 11:09:45.746806    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e689c0fe723b"
	I0807 11:09:45.762850    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:09:45.762863    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:09:45.787775    9637 logs.go:123] Gathering logs for kube-apiserver [5d8a874706c7] ...
	I0807 11:09:45.787785    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8a874706c7"
	I0807 11:09:45.802537    9637 logs.go:123] Gathering logs for coredns [ab1a697e4c71] ...
	I0807 11:09:45.802549    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab1a697e4c71"
	I0807 11:09:48.316106    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:09:53.318783    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:09:53.319191    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:09:53.352742    9637 logs.go:276] 1 containers: [5d8a874706c7]
	I0807 11:09:53.352892    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:09:53.374181    9637 logs.go:276] 1 containers: [d8f8f677a139]
	I0807 11:09:53.374299    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:09:53.388946    9637 logs.go:276] 4 containers: [ab1a697e4c71 8833d45a23c3 7ed58e24ac9f e689c0fe723b]
	I0807 11:09:53.389026    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:09:53.401297    9637 logs.go:276] 1 containers: [b2e77d663d57]
	I0807 11:09:53.401362    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:09:53.413143    9637 logs.go:276] 1 containers: [778c3f206d0d]
	I0807 11:09:53.413202    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:09:53.423808    9637 logs.go:276] 1 containers: [56aad39d4643]
	I0807 11:09:53.423871    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:09:53.434494    9637 logs.go:276] 0 containers: []
	W0807 11:09:53.434504    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:09:53.434552    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:09:53.444932    9637 logs.go:276] 1 containers: [0393cf2c5532]
	I0807 11:09:53.444948    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:09:53.444953    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:09:53.449690    9637 logs.go:123] Gathering logs for etcd [d8f8f677a139] ...
	I0807 11:09:53.449699    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8f8f677a139"
	I0807 11:09:53.463346    9637 logs.go:123] Gathering logs for coredns [e689c0fe723b] ...
	I0807 11:09:53.463358    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e689c0fe723b"
	I0807 11:09:53.474692    9637 logs.go:123] Gathering logs for kube-scheduler [b2e77d663d57] ...
	I0807 11:09:53.474704    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e77d663d57"
	I0807 11:09:53.488573    9637 logs.go:123] Gathering logs for storage-provisioner [0393cf2c5532] ...
	I0807 11:09:53.488585    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0393cf2c5532"
	I0807 11:09:53.500242    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:09:53.500259    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:09:53.535178    9637 logs.go:123] Gathering logs for kube-apiserver [5d8a874706c7] ...
	I0807 11:09:53.535200    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8a874706c7"
	I0807 11:09:53.549314    9637 logs.go:123] Gathering logs for kube-proxy [778c3f206d0d] ...
	I0807 11:09:53.549322    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 778c3f206d0d"
	I0807 11:09:53.560929    9637 logs.go:123] Gathering logs for coredns [ab1a697e4c71] ...
	I0807 11:09:53.560955    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab1a697e4c71"
	I0807 11:09:53.571696    9637 logs.go:123] Gathering logs for coredns [8833d45a23c3] ...
	I0807 11:09:53.571710    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8833d45a23c3"
	I0807 11:09:53.583025    9637 logs.go:123] Gathering logs for kube-controller-manager [56aad39d4643] ...
	I0807 11:09:53.583038    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aad39d4643"
	I0807 11:09:53.601430    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:09:53.601439    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:09:53.626454    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:09:53.626460    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:09:53.637829    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:09:53.637839    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:09:53.675609    9637 logs.go:123] Gathering logs for coredns [7ed58e24ac9f] ...
	I0807 11:09:53.675616    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ed58e24ac9f"
	I0807 11:09:56.187190    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:10:01.189386    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:10:01.189782    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:10:01.221489    9637 logs.go:276] 1 containers: [5d8a874706c7]
	I0807 11:10:01.221607    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:10:01.241022    9637 logs.go:276] 1 containers: [d8f8f677a139]
	I0807 11:10:01.241104    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:10:01.255574    9637 logs.go:276] 4 containers: [ab1a697e4c71 8833d45a23c3 7ed58e24ac9f e689c0fe723b]
	I0807 11:10:01.255641    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:10:01.267524    9637 logs.go:276] 1 containers: [b2e77d663d57]
	I0807 11:10:01.267592    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:10:01.278059    9637 logs.go:276] 1 containers: [778c3f206d0d]
	I0807 11:10:01.278124    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:10:01.288595    9637 logs.go:276] 1 containers: [56aad39d4643]
	I0807 11:10:01.288654    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:10:01.298800    9637 logs.go:276] 0 containers: []
	W0807 11:10:01.298811    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:10:01.298861    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:10:01.309183    9637 logs.go:276] 1 containers: [0393cf2c5532]
	I0807 11:10:01.309201    9637 logs.go:123] Gathering logs for coredns [e689c0fe723b] ...
	I0807 11:10:01.309206    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e689c0fe723b"
	I0807 11:10:01.321174    9637 logs.go:123] Gathering logs for kube-scheduler [b2e77d663d57] ...
	I0807 11:10:01.321183    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e77d663d57"
	I0807 11:10:01.335747    9637 logs.go:123] Gathering logs for storage-provisioner [0393cf2c5532] ...
	I0807 11:10:01.335757    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0393cf2c5532"
	I0807 11:10:01.347189    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:10:01.347199    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:10:01.370787    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:10:01.370795    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:10:01.374828    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:10:01.374833    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:10:01.410533    9637 logs.go:123] Gathering logs for etcd [d8f8f677a139] ...
	I0807 11:10:01.410547    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8f8f677a139"
	I0807 11:10:01.424375    9637 logs.go:123] Gathering logs for kube-proxy [778c3f206d0d] ...
	I0807 11:10:01.424388    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 778c3f206d0d"
	I0807 11:10:01.436792    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:10:01.436804    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:10:01.448834    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:10:01.448847    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:10:01.486663    9637 logs.go:123] Gathering logs for coredns [ab1a697e4c71] ...
	I0807 11:10:01.486672    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab1a697e4c71"
	I0807 11:10:01.498461    9637 logs.go:123] Gathering logs for coredns [7ed58e24ac9f] ...
	I0807 11:10:01.498474    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ed58e24ac9f"
	I0807 11:10:01.511631    9637 logs.go:123] Gathering logs for kube-controller-manager [56aad39d4643] ...
	I0807 11:10:01.511642    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aad39d4643"
	I0807 11:10:01.529422    9637 logs.go:123] Gathering logs for kube-apiserver [5d8a874706c7] ...
	I0807 11:10:01.529434    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8a874706c7"
	I0807 11:10:01.544223    9637 logs.go:123] Gathering logs for coredns [8833d45a23c3] ...
	I0807 11:10:01.544236    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8833d45a23c3"
	I0807 11:10:04.058141    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:10:09.060772    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:10:09.060847    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:10:09.073784    9637 logs.go:276] 1 containers: [5d8a874706c7]
	I0807 11:10:09.073846    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:10:09.085935    9637 logs.go:276] 1 containers: [d8f8f677a139]
	I0807 11:10:09.086007    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:10:09.097540    9637 logs.go:276] 4 containers: [ab1a697e4c71 8833d45a23c3 7ed58e24ac9f e689c0fe723b]
	I0807 11:10:09.097595    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:10:09.109253    9637 logs.go:276] 1 containers: [b2e77d663d57]
	I0807 11:10:09.109301    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:10:09.120478    9637 logs.go:276] 1 containers: [778c3f206d0d]
	I0807 11:10:09.120543    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:10:09.132273    9637 logs.go:276] 1 containers: [56aad39d4643]
	I0807 11:10:09.132348    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:10:09.143879    9637 logs.go:276] 0 containers: []
	W0807 11:10:09.143891    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:10:09.143935    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:10:09.159848    9637 logs.go:276] 1 containers: [0393cf2c5532]
	I0807 11:10:09.159862    9637 logs.go:123] Gathering logs for coredns [8833d45a23c3] ...
	I0807 11:10:09.159867    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8833d45a23c3"
	I0807 11:10:09.175321    9637 logs.go:123] Gathering logs for coredns [e689c0fe723b] ...
	I0807 11:10:09.175331    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e689c0fe723b"
	I0807 11:10:09.189490    9637 logs.go:123] Gathering logs for kube-scheduler [b2e77d663d57] ...
	I0807 11:10:09.189499    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e77d663d57"
	I0807 11:10:09.204402    9637 logs.go:123] Gathering logs for kube-controller-manager [56aad39d4643] ...
	I0807 11:10:09.204412    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aad39d4643"
	I0807 11:10:09.222272    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:10:09.222284    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:10:09.260615    9637 logs.go:123] Gathering logs for coredns [ab1a697e4c71] ...
	I0807 11:10:09.260630    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab1a697e4c71"
	I0807 11:10:09.276311    9637 logs.go:123] Gathering logs for kube-proxy [778c3f206d0d] ...
	I0807 11:10:09.276319    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 778c3f206d0d"
	I0807 11:10:09.288491    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:10:09.288500    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:10:09.300313    9637 logs.go:123] Gathering logs for kube-apiserver [5d8a874706c7] ...
	I0807 11:10:09.300320    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8a874706c7"
	I0807 11:10:09.322423    9637 logs.go:123] Gathering logs for etcd [d8f8f677a139] ...
	I0807 11:10:09.322431    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8f8f677a139"
	I0807 11:10:09.340897    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:10:09.340907    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:10:09.366632    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:10:09.366648    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:10:09.406817    9637 logs.go:123] Gathering logs for coredns [7ed58e24ac9f] ...
	I0807 11:10:09.406864    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ed58e24ac9f"
	I0807 11:10:09.421013    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:10:09.421026    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:10:09.425919    9637 logs.go:123] Gathering logs for storage-provisioner [0393cf2c5532] ...
	I0807 11:10:09.425932    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0393cf2c5532"
	I0807 11:10:11.940810    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:10:16.943397    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:10:16.943447    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:10:16.956220    9637 logs.go:276] 1 containers: [5d8a874706c7]
	I0807 11:10:16.956276    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:10:16.966872    9637 logs.go:276] 1 containers: [d8f8f677a139]
	I0807 11:10:16.966946    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:10:16.982527    9637 logs.go:276] 4 containers: [ab1a697e4c71 8833d45a23c3 7ed58e24ac9f e689c0fe723b]
	I0807 11:10:16.982595    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:10:16.992482    9637 logs.go:276] 1 containers: [b2e77d663d57]
	I0807 11:10:16.992546    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:10:17.005296    9637 logs.go:276] 1 containers: [778c3f206d0d]
	I0807 11:10:17.005365    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:10:17.044807    9637 logs.go:276] 1 containers: [56aad39d4643]
	I0807 11:10:17.044867    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:10:17.060286    9637 logs.go:276] 0 containers: []
	W0807 11:10:17.060299    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:10:17.060357    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:10:17.070517    9637 logs.go:276] 1 containers: [0393cf2c5532]
	I0807 11:10:17.070533    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:10:17.070539    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:10:17.105459    9637 logs.go:123] Gathering logs for coredns [ab1a697e4c71] ...
	I0807 11:10:17.105470    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab1a697e4c71"
	I0807 11:10:17.117045    9637 logs.go:123] Gathering logs for coredns [8833d45a23c3] ...
	I0807 11:10:17.117057    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8833d45a23c3"
	I0807 11:10:17.129157    9637 logs.go:123] Gathering logs for coredns [e689c0fe723b] ...
	I0807 11:10:17.129168    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e689c0fe723b"
	I0807 11:10:17.140933    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:10:17.140943    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:10:17.179238    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:10:17.179247    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:10:17.204347    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:10:17.204356    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:10:17.216215    9637 logs.go:123] Gathering logs for kube-apiserver [5d8a874706c7] ...
	I0807 11:10:17.216228    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8a874706c7"
	I0807 11:10:17.231038    9637 logs.go:123] Gathering logs for etcd [d8f8f677a139] ...
	I0807 11:10:17.231048    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8f8f677a139"
	I0807 11:10:17.245337    9637 logs.go:123] Gathering logs for coredns [7ed58e24ac9f] ...
	I0807 11:10:17.245348    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ed58e24ac9f"
	I0807 11:10:17.259057    9637 logs.go:123] Gathering logs for kube-proxy [778c3f206d0d] ...
	I0807 11:10:17.259067    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 778c3f206d0d"
	I0807 11:10:17.270997    9637 logs.go:123] Gathering logs for kube-controller-manager [56aad39d4643] ...
	I0807 11:10:17.271006    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aad39d4643"
	I0807 11:10:17.289266    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:10:17.289277    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:10:17.293555    9637 logs.go:123] Gathering logs for kube-scheduler [b2e77d663d57] ...
	I0807 11:10:17.293562    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e77d663d57"
	I0807 11:10:17.308976    9637 logs.go:123] Gathering logs for storage-provisioner [0393cf2c5532] ...
	I0807 11:10:17.308988    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0393cf2c5532"
	I0807 11:10:19.822390    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:10:24.824587    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:10:24.824996    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:10:24.865456    9637 logs.go:276] 1 containers: [5d8a874706c7]
	I0807 11:10:24.865606    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:10:24.887076    9637 logs.go:276] 1 containers: [d8f8f677a139]
	I0807 11:10:24.887170    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:10:24.903083    9637 logs.go:276] 4 containers: [ab1a697e4c71 8833d45a23c3 7ed58e24ac9f e689c0fe723b]
	I0807 11:10:24.903152    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:10:24.915958    9637 logs.go:276] 1 containers: [b2e77d663d57]
	I0807 11:10:24.916025    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:10:24.926910    9637 logs.go:276] 1 containers: [778c3f206d0d]
	I0807 11:10:24.926963    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:10:24.938115    9637 logs.go:276] 1 containers: [56aad39d4643]
	I0807 11:10:24.938182    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:10:24.948292    9637 logs.go:276] 0 containers: []
	W0807 11:10:24.948308    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:10:24.948355    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:10:24.960712    9637 logs.go:276] 1 containers: [0393cf2c5532]
	I0807 11:10:24.960727    9637 logs.go:123] Gathering logs for kube-apiserver [5d8a874706c7] ...
	I0807 11:10:24.960732    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8a874706c7"
	I0807 11:10:24.975972    9637 logs.go:123] Gathering logs for kube-proxy [778c3f206d0d] ...
	I0807 11:10:24.975983    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 778c3f206d0d"
	I0807 11:10:24.988562    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:10:24.988574    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:10:25.013905    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:10:25.013912    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:10:25.017828    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:10:25.017837    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:10:25.052384    9637 logs.go:123] Gathering logs for etcd [d8f8f677a139] ...
	I0807 11:10:25.052396    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8f8f677a139"
	I0807 11:10:25.069130    9637 logs.go:123] Gathering logs for coredns [7ed58e24ac9f] ...
	I0807 11:10:25.069145    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ed58e24ac9f"
	I0807 11:10:25.089719    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:10:25.089731    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:10:25.101379    9637 logs.go:123] Gathering logs for coredns [ab1a697e4c71] ...
	I0807 11:10:25.101391    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab1a697e4c71"
	I0807 11:10:25.115106    9637 logs.go:123] Gathering logs for coredns [8833d45a23c3] ...
	I0807 11:10:25.115119    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8833d45a23c3"
	I0807 11:10:25.126844    9637 logs.go:123] Gathering logs for kube-scheduler [b2e77d663d57] ...
	I0807 11:10:25.126855    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e77d663d57"
	I0807 11:10:25.141716    9637 logs.go:123] Gathering logs for kube-controller-manager [56aad39d4643] ...
	I0807 11:10:25.141728    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aad39d4643"
	I0807 11:10:25.159547    9637 logs.go:123] Gathering logs for storage-provisioner [0393cf2c5532] ...
	I0807 11:10:25.159557    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0393cf2c5532"
	I0807 11:10:25.171003    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:10:25.171015    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:10:25.207988    9637 logs.go:123] Gathering logs for coredns [e689c0fe723b] ...
	I0807 11:10:25.207997    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e689c0fe723b"
	I0807 11:10:27.725511    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:10:32.728212    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:10:32.728288    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:10:32.739557    9637 logs.go:276] 1 containers: [5d8a874706c7]
	I0807 11:10:32.739613    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:10:32.750017    9637 logs.go:276] 1 containers: [d8f8f677a139]
	I0807 11:10:32.750073    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:10:32.761532    9637 logs.go:276] 4 containers: [ab1a697e4c71 8833d45a23c3 7ed58e24ac9f e689c0fe723b]
	I0807 11:10:32.761599    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:10:32.775732    9637 logs.go:276] 1 containers: [b2e77d663d57]
	I0807 11:10:32.775794    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:10:32.787132    9637 logs.go:276] 1 containers: [778c3f206d0d]
	I0807 11:10:32.787189    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:10:32.798734    9637 logs.go:276] 1 containers: [56aad39d4643]
	I0807 11:10:32.798784    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:10:32.809801    9637 logs.go:276] 0 containers: []
	W0807 11:10:32.809813    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:10:32.809873    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:10:32.821083    9637 logs.go:276] 1 containers: [0393cf2c5532]
	I0807 11:10:32.821099    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:10:32.821104    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:10:32.859569    9637 logs.go:123] Gathering logs for coredns [7ed58e24ac9f] ...
	I0807 11:10:32.859581    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ed58e24ac9f"
	I0807 11:10:32.873910    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:10:32.873921    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:10:32.886617    9637 logs.go:123] Gathering logs for kube-apiserver [5d8a874706c7] ...
	I0807 11:10:32.886627    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8a874706c7"
	I0807 11:10:32.903103    9637 logs.go:123] Gathering logs for etcd [d8f8f677a139] ...
	I0807 11:10:32.903112    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8f8f677a139"
	I0807 11:10:32.917017    9637 logs.go:123] Gathering logs for coredns [ab1a697e4c71] ...
	I0807 11:10:32.917029    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab1a697e4c71"
	I0807 11:10:32.930794    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:10:32.930804    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:10:32.957146    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:10:32.957164    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:10:33.001436    9637 logs.go:123] Gathering logs for coredns [8833d45a23c3] ...
	I0807 11:10:33.001450    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8833d45a23c3"
	I0807 11:10:33.014264    9637 logs.go:123] Gathering logs for kube-proxy [778c3f206d0d] ...
	I0807 11:10:33.014276    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 778c3f206d0d"
	I0807 11:10:33.027367    9637 logs.go:123] Gathering logs for kube-controller-manager [56aad39d4643] ...
	I0807 11:10:33.027377    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aad39d4643"
	I0807 11:10:33.045619    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:10:33.045639    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:10:33.050175    9637 logs.go:123] Gathering logs for coredns [e689c0fe723b] ...
	I0807 11:10:33.050186    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e689c0fe723b"
	I0807 11:10:33.063520    9637 logs.go:123] Gathering logs for kube-scheduler [b2e77d663d57] ...
	I0807 11:10:33.063533    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e77d663d57"
	I0807 11:10:33.078900    9637 logs.go:123] Gathering logs for storage-provisioner [0393cf2c5532] ...
	I0807 11:10:33.078911    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0393cf2c5532"
	I0807 11:10:35.593761    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:10:40.596583    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:10:40.596949    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:10:40.626337    9637 logs.go:276] 1 containers: [5d8a874706c7]
	I0807 11:10:40.626453    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:10:40.644889    9637 logs.go:276] 1 containers: [d8f8f677a139]
	I0807 11:10:40.644984    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:10:40.658758    9637 logs.go:276] 4 containers: [ab1a697e4c71 8833d45a23c3 7ed58e24ac9f e689c0fe723b]
	I0807 11:10:40.658836    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:10:40.670477    9637 logs.go:276] 1 containers: [b2e77d663d57]
	I0807 11:10:40.670545    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:10:40.680983    9637 logs.go:276] 1 containers: [778c3f206d0d]
	I0807 11:10:40.681034    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:10:40.692306    9637 logs.go:276] 1 containers: [56aad39d4643]
	I0807 11:10:40.692358    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:10:40.704279    9637 logs.go:276] 0 containers: []
	W0807 11:10:40.704292    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:10:40.704355    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:10:40.716669    9637 logs.go:276] 1 containers: [0393cf2c5532]
	I0807 11:10:40.716688    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:10:40.716694    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:10:40.742262    9637 logs.go:123] Gathering logs for kube-apiserver [5d8a874706c7] ...
	I0807 11:10:40.742277    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8a874706c7"
	I0807 11:10:40.758349    9637 logs.go:123] Gathering logs for coredns [e689c0fe723b] ...
	I0807 11:10:40.758367    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e689c0fe723b"
	I0807 11:10:40.771567    9637 logs.go:123] Gathering logs for kube-controller-manager [56aad39d4643] ...
	I0807 11:10:40.771578    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aad39d4643"
	I0807 11:10:40.791913    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:10:40.791926    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:10:40.831424    9637 logs.go:123] Gathering logs for etcd [d8f8f677a139] ...
	I0807 11:10:40.831440    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8f8f677a139"
	I0807 11:10:40.847617    9637 logs.go:123] Gathering logs for coredns [7ed58e24ac9f] ...
	I0807 11:10:40.847629    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ed58e24ac9f"
	I0807 11:10:40.864129    9637 logs.go:123] Gathering logs for kube-scheduler [b2e77d663d57] ...
	I0807 11:10:40.864139    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e77d663d57"
	I0807 11:10:40.878874    9637 logs.go:123] Gathering logs for kube-proxy [778c3f206d0d] ...
	I0807 11:10:40.878886    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 778c3f206d0d"
	I0807 11:10:40.890540    9637 logs.go:123] Gathering logs for storage-provisioner [0393cf2c5532] ...
	I0807 11:10:40.890551    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0393cf2c5532"
	I0807 11:10:40.902479    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:10:40.902489    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:10:40.906723    9637 logs.go:123] Gathering logs for coredns [ab1a697e4c71] ...
	I0807 11:10:40.906729    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab1a697e4c71"
	I0807 11:10:40.918877    9637 logs.go:123] Gathering logs for coredns [8833d45a23c3] ...
	I0807 11:10:40.918887    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8833d45a23c3"
	I0807 11:10:40.930786    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:10:40.930797    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:10:40.969472    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:10:40.969481    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:10:43.483146    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:10:48.484047    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:10:48.484277    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:10:48.511697    9637 logs.go:276] 1 containers: [5d8a874706c7]
	I0807 11:10:48.511823    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:10:48.529496    9637 logs.go:276] 1 containers: [d8f8f677a139]
	I0807 11:10:48.529577    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:10:48.543462    9637 logs.go:276] 4 containers: [ab1a697e4c71 8833d45a23c3 7ed58e24ac9f e689c0fe723b]
	I0807 11:10:48.543533    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:10:48.554740    9637 logs.go:276] 1 containers: [b2e77d663d57]
	I0807 11:10:48.554801    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:10:48.565236    9637 logs.go:276] 1 containers: [778c3f206d0d]
	I0807 11:10:48.565296    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:10:48.575342    9637 logs.go:276] 1 containers: [56aad39d4643]
	I0807 11:10:48.575412    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:10:48.587165    9637 logs.go:276] 0 containers: []
	W0807 11:10:48.587179    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:10:48.587235    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:10:48.597517    9637 logs.go:276] 1 containers: [0393cf2c5532]
	I0807 11:10:48.597538    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:10:48.597544    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:10:48.601859    9637 logs.go:123] Gathering logs for coredns [ab1a697e4c71] ...
	I0807 11:10:48.601867    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab1a697e4c71"
	I0807 11:10:48.613837    9637 logs.go:123] Gathering logs for kube-scheduler [b2e77d663d57] ...
	I0807 11:10:48.613849    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e77d663d57"
	I0807 11:10:48.628029    9637 logs.go:123] Gathering logs for kube-proxy [778c3f206d0d] ...
	I0807 11:10:48.628040    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 778c3f206d0d"
	I0807 11:10:48.639740    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:10:48.639751    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:10:48.664886    9637 logs.go:123] Gathering logs for coredns [e689c0fe723b] ...
	I0807 11:10:48.664893    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e689c0fe723b"
	I0807 11:10:48.676467    9637 logs.go:123] Gathering logs for kube-controller-manager [56aad39d4643] ...
	I0807 11:10:48.676478    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aad39d4643"
	I0807 11:10:48.694547    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:10:48.694559    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:10:48.732314    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:10:48.732322    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:10:48.800203    9637 logs.go:123] Gathering logs for etcd [d8f8f677a139] ...
	I0807 11:10:48.800217    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8f8f677a139"
	I0807 11:10:48.813981    9637 logs.go:123] Gathering logs for storage-provisioner [0393cf2c5532] ...
	I0807 11:10:48.813991    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0393cf2c5532"
	I0807 11:10:48.825235    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:10:48.825250    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:10:48.837126    9637 logs.go:123] Gathering logs for kube-apiserver [5d8a874706c7] ...
	I0807 11:10:48.837140    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8a874706c7"
	I0807 11:10:48.856225    9637 logs.go:123] Gathering logs for coredns [8833d45a23c3] ...
	I0807 11:10:48.856238    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8833d45a23c3"
	I0807 11:10:48.867703    9637 logs.go:123] Gathering logs for coredns [7ed58e24ac9f] ...
	I0807 11:10:48.867715    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ed58e24ac9f"
	I0807 11:10:51.381390    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:10:56.383734    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:10:56.383841    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:10:56.395689    9637 logs.go:276] 1 containers: [5d8a874706c7]
	I0807 11:10:56.395742    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:10:56.407129    9637 logs.go:276] 1 containers: [d8f8f677a139]
	I0807 11:10:56.407190    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:10:56.426834    9637 logs.go:276] 4 containers: [ab1a697e4c71 8833d45a23c3 7ed58e24ac9f e689c0fe723b]
	I0807 11:10:56.426879    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:10:56.437575    9637 logs.go:276] 1 containers: [b2e77d663d57]
	I0807 11:10:56.437633    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:10:56.448791    9637 logs.go:276] 1 containers: [778c3f206d0d]
	I0807 11:10:56.448853    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:10:56.461578    9637 logs.go:276] 1 containers: [56aad39d4643]
	I0807 11:10:56.461632    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:10:56.472223    9637 logs.go:276] 0 containers: []
	W0807 11:10:56.472233    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:10:56.472275    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:10:56.483585    9637 logs.go:276] 1 containers: [0393cf2c5532]
	I0807 11:10:56.483601    9637 logs.go:123] Gathering logs for kube-apiserver [5d8a874706c7] ...
	I0807 11:10:56.483605    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8a874706c7"
	I0807 11:10:56.499049    9637 logs.go:123] Gathering logs for etcd [d8f8f677a139] ...
	I0807 11:10:56.499066    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8f8f677a139"
	I0807 11:10:56.514899    9637 logs.go:123] Gathering logs for coredns [ab1a697e4c71] ...
	I0807 11:10:56.514906    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab1a697e4c71"
	I0807 11:10:56.527286    9637 logs.go:123] Gathering logs for storage-provisioner [0393cf2c5532] ...
	I0807 11:10:56.527297    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0393cf2c5532"
	I0807 11:10:56.543698    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:10:56.543709    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:10:56.548720    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:10:56.548732    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:10:56.589083    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:10:56.589096    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:10:56.602167    9637 logs.go:123] Gathering logs for coredns [8833d45a23c3] ...
	I0807 11:10:56.602178    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8833d45a23c3"
	I0807 11:10:56.614893    9637 logs.go:123] Gathering logs for coredns [e689c0fe723b] ...
	I0807 11:10:56.614902    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e689c0fe723b"
	I0807 11:10:56.631250    9637 logs.go:123] Gathering logs for kube-scheduler [b2e77d663d57] ...
	I0807 11:10:56.631266    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e77d663d57"
	I0807 11:10:56.646829    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:10:56.646841    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:10:56.670791    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:10:56.670805    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:10:56.709399    9637 logs.go:123] Gathering logs for coredns [7ed58e24ac9f] ...
	I0807 11:10:56.709413    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ed58e24ac9f"
	I0807 11:10:56.721832    9637 logs.go:123] Gathering logs for kube-proxy [778c3f206d0d] ...
	I0807 11:10:56.721840    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 778c3f206d0d"
	I0807 11:10:56.734210    9637 logs.go:123] Gathering logs for kube-controller-manager [56aad39d4643] ...
	I0807 11:10:56.734225    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aad39d4643"
	I0807 11:10:59.255206    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:11:04.257977    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:11:04.258242    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:11:04.284456    9637 logs.go:276] 1 containers: [5d8a874706c7]
	I0807 11:11:04.284567    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:11:04.301966    9637 logs.go:276] 1 containers: [d8f8f677a139]
	I0807 11:11:04.302050    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:11:04.315265    9637 logs.go:276] 4 containers: [ab1a697e4c71 8833d45a23c3 7ed58e24ac9f e689c0fe723b]
	I0807 11:11:04.315338    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:11:04.327236    9637 logs.go:276] 1 containers: [b2e77d663d57]
	I0807 11:11:04.327307    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:11:04.338155    9637 logs.go:276] 1 containers: [778c3f206d0d]
	I0807 11:11:04.338221    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:11:04.348494    9637 logs.go:276] 1 containers: [56aad39d4643]
	I0807 11:11:04.348558    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:11:04.358753    9637 logs.go:276] 0 containers: []
	W0807 11:11:04.358764    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:11:04.358819    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:11:04.369038    9637 logs.go:276] 1 containers: [0393cf2c5532]
	I0807 11:11:04.369055    9637 logs.go:123] Gathering logs for storage-provisioner [0393cf2c5532] ...
	I0807 11:11:04.369064    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0393cf2c5532"
	I0807 11:11:04.382449    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:11:04.382459    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:11:04.394373    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:11:04.394383    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:11:04.430054    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:11:04.430065    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:11:04.434317    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:11:04.434325    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:11:04.470602    9637 logs.go:123] Gathering logs for coredns [8833d45a23c3] ...
	I0807 11:11:04.470612    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8833d45a23c3"
	I0807 11:11:04.482517    9637 logs.go:123] Gathering logs for coredns [7ed58e24ac9f] ...
	I0807 11:11:04.482527    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ed58e24ac9f"
	I0807 11:11:04.493600    9637 logs.go:123] Gathering logs for kube-proxy [778c3f206d0d] ...
	I0807 11:11:04.493613    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 778c3f206d0d"
	I0807 11:11:04.504871    9637 logs.go:123] Gathering logs for coredns [e689c0fe723b] ...
	I0807 11:11:04.504880    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e689c0fe723b"
	I0807 11:11:04.516380    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:11:04.516393    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:11:04.539765    9637 logs.go:123] Gathering logs for etcd [d8f8f677a139] ...
	I0807 11:11:04.539773    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8f8f677a139"
	I0807 11:11:04.553637    9637 logs.go:123] Gathering logs for kube-scheduler [b2e77d663d57] ...
	I0807 11:11:04.553650    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e77d663d57"
	I0807 11:11:04.567982    9637 logs.go:123] Gathering logs for kube-controller-manager [56aad39d4643] ...
	I0807 11:11:04.567992    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aad39d4643"
	I0807 11:11:04.592222    9637 logs.go:123] Gathering logs for kube-apiserver [5d8a874706c7] ...
	I0807 11:11:04.592234    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8a874706c7"
	I0807 11:11:04.607029    9637 logs.go:123] Gathering logs for coredns [ab1a697e4c71] ...
	I0807 11:11:04.607039    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab1a697e4c71"
	I0807 11:11:07.119427    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:11:12.121727    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:11:12.122129    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0807 11:11:12.167403    9637 logs.go:276] 1 containers: [5d8a874706c7]
	I0807 11:11:12.167517    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0807 11:11:12.187696    9637 logs.go:276] 1 containers: [d8f8f677a139]
	I0807 11:11:12.187790    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0807 11:11:12.202314    9637 logs.go:276] 4 containers: [1fa70897a6b1 a87e6c228539 ab1a697e4c71 8833d45a23c3]
	I0807 11:11:12.202389    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0807 11:11:12.214308    9637 logs.go:276] 1 containers: [b2e77d663d57]
	I0807 11:11:12.214369    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0807 11:11:12.224897    9637 logs.go:276] 1 containers: [778c3f206d0d]
	I0807 11:11:12.224962    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0807 11:11:12.235678    9637 logs.go:276] 1 containers: [56aad39d4643]
	I0807 11:11:12.235734    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0807 11:11:12.246548    9637 logs.go:276] 0 containers: []
	W0807 11:11:12.246559    9637 logs.go:278] No container was found matching "kindnet"
	I0807 11:11:12.246607    9637 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0807 11:11:12.258932    9637 logs.go:276] 1 containers: [0393cf2c5532]
	I0807 11:11:12.258945    9637 logs.go:123] Gathering logs for kube-proxy [778c3f206d0d] ...
	I0807 11:11:12.258950    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 778c3f206d0d"
	I0807 11:11:12.271052    9637 logs.go:123] Gathering logs for kube-controller-manager [56aad39d4643] ...
	I0807 11:11:12.271065    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aad39d4643"
	I0807 11:11:12.288715    9637 logs.go:123] Gathering logs for kubelet ...
	I0807 11:11:12.288727    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 11:11:12.326095    9637 logs.go:123] Gathering logs for coredns [a87e6c228539] ...
	I0807 11:11:12.326102    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87e6c228539"
	I0807 11:11:12.337929    9637 logs.go:123] Gathering logs for dmesg ...
	I0807 11:11:12.337941    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 11:11:12.342167    9637 logs.go:123] Gathering logs for describe nodes ...
	I0807 11:11:12.342177    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 11:11:12.378047    9637 logs.go:123] Gathering logs for coredns [ab1a697e4c71] ...
	I0807 11:11:12.378060    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab1a697e4c71"
	I0807 11:11:12.390027    9637 logs.go:123] Gathering logs for container status ...
	I0807 11:11:12.390040    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 11:11:12.401539    9637 logs.go:123] Gathering logs for etcd [d8f8f677a139] ...
	I0807 11:11:12.401551    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8f8f677a139"
	I0807 11:11:12.417139    9637 logs.go:123] Gathering logs for coredns [1fa70897a6b1] ...
	I0807 11:11:12.417152    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa70897a6b1"
	I0807 11:11:12.428906    9637 logs.go:123] Gathering logs for kube-scheduler [b2e77d663d57] ...
	I0807 11:11:12.428917    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e77d663d57"
	I0807 11:11:12.444656    9637 logs.go:123] Gathering logs for storage-provisioner [0393cf2c5532] ...
	I0807 11:11:12.444666    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0393cf2c5532"
	I0807 11:11:12.456512    9637 logs.go:123] Gathering logs for Docker ...
	I0807 11:11:12.456524    9637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0807 11:11:12.479189    9637 logs.go:123] Gathering logs for kube-apiserver [5d8a874706c7] ...
	I0807 11:11:12.479196    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8a874706c7"
	I0807 11:11:12.493379    9637 logs.go:123] Gathering logs for coredns [8833d45a23c3] ...
	I0807 11:11:12.493392    9637 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8833d45a23c3"
	I0807 11:11:15.007831    9637 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0807 11:11:20.008568    9637 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0807 11:11:20.012917    9637 out.go:177] 
	W0807 11:11:20.016769    9637 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0807 11:11:20.016776    9637 out.go:239] * 
	W0807 11:11:20.017208    9637 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:11:20.032845    9637 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-423000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (571.84s)
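
The failure above is minikube giving up after six minutes of polling https://10.0.2.15:8443/healthz; between polls it enumerates the control-plane containers with `docker ps -a --filter=name=k8s_<component>` and tails each one with `docker logs --tail 400 <id>`. The probe and the collection can be repeated by hand. A minimal bash sketch, assuming the guest is still reachable over SSH (the ssh_runner lines above show it was) and that curl is available in the guest image; the 10.0.2.15 address and the apiserver container ID 5d8a874706c7 are taken from this run's log:

	# Probe the apiserver health endpoint the way api_server.go does:
	out/minikube-darwin-arm64 ssh -p stopped-upgrade-423000 -- \
	    curl -k --max-time 5 https://10.0.2.15:8443/healthz
	# Locate the apiserver container the way logs.go does, then tail its log:
	out/minikube-darwin-arm64 ssh -p stopped-upgrade-423000 -- \
	    "docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}"
	out/minikube-darwin-arm64 ssh -p stopped-upgrade-423000 -- \
	    "docker logs --tail 400 5d8a874706c7"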

TestPause/serial/Start (9.88s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-275000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-275000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.821142084s)

-- stdout --
	* [pause-275000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-275000" primary control-plane node in "pause-275000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-275000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-275000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-275000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-275000 -n pause-275000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-275000 -n pause-275000: exit status 7 (60.780875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-275000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.88s)
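
This failure and every remaining one in this section share a single host-side cause: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon, so each qemu-system-aarch64 launch dies with `Failed to connect to "/var/run/socket_vmnet": Connection refused`. A bash sketch of the host-side checks; the socket path matches the SocketVMnetPath the tests log, and the restart line assumes socket_vmnet was installed as a Homebrew service per minikube's qemu2 driver docs:

	# Does the unix socket exist, and is any process holding it open?
	ls -l /var/run/socket_vmnet
	sudo lsof -U | grep socket_vmnet
	# If nothing is listening, restart the daemon (Homebrew-managed service assumed):
	HOMEBREW=$(which brew)
	sudo "${HOMEBREW}" services restart socket_vmnet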

TestNoKubernetes/serial/StartWithK8s (9.94s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-673000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-673000 --driver=qemu2 : exit status 80 (9.886636125s)

-- stdout --
	* [NoKubernetes-673000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-673000" primary control-plane node in "NoKubernetes-673000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-673000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-673000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-673000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-673000 -n NoKubernetes-673000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-673000 -n NoKubernetes-673000: exit status 7 (52.827875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-673000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.94s)

TestNoKubernetes/serial/StartWithStopK8s (5.32s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-673000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-673000 --no-kubernetes --driver=qemu2 : exit status 80 (5.263600083s)

-- stdout --
	* [NoKubernetes-673000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-673000
	* Restarting existing qemu2 VM for "NoKubernetes-673000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-673000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-673000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-673000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-673000 -n NoKubernetes-673000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-673000 -n NoKubernetes-673000: exit status 7 (57.101625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-673000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.32s)
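
This failure and the two that follow (Start, StartNoArgs) reuse the existing NoKubernetes-673000 profile and hit the same refused socket on "driver start". The error text itself names the recovery; as a sketch, once socket_vmnet is back up:

	# Drop the stale profile the error message points at, then retry the start:
	out/minikube-darwin-arm64 delete -p NoKubernetes-673000
	out/minikube-darwin-arm64 start -p NoKubernetes-673000 --no-kubernetes --driver=qemu2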

TestNoKubernetes/serial/Start (5.31s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-673000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-673000 --no-kubernetes --driver=qemu2 : exit status 80 (5.247514208s)

-- stdout --
	* [NoKubernetes-673000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-673000
	* Restarting existing qemu2 VM for "NoKubernetes-673000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-673000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-673000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-673000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-673000 -n NoKubernetes-673000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-673000 -n NoKubernetes-673000: exit status 7 (64.982209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-673000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)

TestNoKubernetes/serial/StartNoArgs (5.32s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-673000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-673000 --driver=qemu2 : exit status 80 (5.262616s)

-- stdout --
	* [NoKubernetes-673000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-673000
	* Restarting existing qemu2 VM for "NoKubernetes-673000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-673000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-673000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-673000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-673000 -n NoKubernetes-673000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-673000 -n NoKubernetes-673000: exit status 7 (55.05525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-673000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.32s)
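
Each post-mortem above checks the host with a Go-template status query, and exit status 7 is why the helper notes "(may be ok)": per minikube's status help text, the exit code encodes VM, cluster, and Kubernetes state as bits, so 7 (1 + 2 + 4, everything down) together with the "Stopped" output is the expected shape for a VM that never came up, not a command error. Rerunning the check by hand, as a sketch:

	# The same Go-template status query the helpers run; $? carries the bit-encoded state:
	out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-673000 -n NoKubernetes-673000
	echo $?   # 7 = 1 (VM not running) + 2 (cluster NOK) + 4 (kubernetes NOK)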

TestNetworkPlugins/group/auto/Start (9.94s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-921000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-921000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.9371165s)

-- stdout --
	* [auto-921000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-921000" primary control-plane node in "auto-921000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-921000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 11:09:32.772876    9861 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:09:32.773003    9861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:09:32.773006    9861 out.go:304] Setting ErrFile to fd 2...
	I0807 11:09:32.773009    9861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:09:32.773136    9861 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:09:32.774210    9861 out.go:298] Setting JSON to false
	I0807 11:09:32.791909    9861 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5941,"bootTime":1723048231,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 11:09:32.791986    9861 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 11:09:32.796894    9861 out.go:177] * [auto-921000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 11:09:32.804929    9861 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 11:09:32.804982    9861 notify.go:220] Checking for updates...
	I0807 11:09:32.812891    9861 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:09:32.815923    9861 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 11:09:32.818905    9861 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 11:09:32.821910    9861 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 11:09:32.824935    9861 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 11:09:32.828245    9861 config.go:182] Loaded profile config "multinode-190000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 11:09:32.828321    9861 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0807 11:09:32.828377    9861 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 11:09:32.832833    9861 out.go:177] * Using the qemu2 driver based on user configuration
	I0807 11:09:32.839791    9861 start.go:297] selected driver: qemu2
	I0807 11:09:32.839797    9861 start.go:901] validating driver "qemu2" against <nil>
	I0807 11:09:32.839802    9861 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 11:09:32.842100    9861 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 11:09:32.844876    9861 out.go:177] * Automatically selected the socket_vmnet network
	I0807 11:09:32.848037    9861 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 11:09:32.848060    9861 cni.go:84] Creating CNI manager for ""
	I0807 11:09:32.848068    9861 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 11:09:32.848073    9861 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 11:09:32.848124    9861 start.go:340] cluster config:
	{Name:auto-921000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:09:32.851938    9861 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:09:32.859869    9861 out.go:177] * Starting "auto-921000" primary control-plane node in "auto-921000" cluster
	I0807 11:09:32.863946    9861 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 11:09:32.863963    9861 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 11:09:32.863972    9861 cache.go:56] Caching tarball of preloaded images
	I0807 11:09:32.864043    9861 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 11:09:32.864049    9861 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 11:09:32.864117    9861 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/auto-921000/config.json ...
	I0807 11:09:32.864133    9861 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/auto-921000/config.json: {Name:mk5602b30108113c95f5c46ccc572421bc68cf4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:09:32.864531    9861 start.go:360] acquireMachinesLock for auto-921000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:09:32.864562    9861 start.go:364] duration metric: took 26.25µs to acquireMachinesLock for "auto-921000"
	I0807 11:09:32.864572    9861 start.go:93] Provisioning new machine with config: &{Name:auto-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:09:32.864603    9861 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:09:32.872712    9861 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0807 11:09:32.889804    9861 start.go:159] libmachine.API.Create for "auto-921000" (driver="qemu2")
	I0807 11:09:32.889836    9861 client.go:168] LocalClient.Create starting
	I0807 11:09:32.889907    9861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:09:32.889937    9861 main.go:141] libmachine: Decoding PEM data...
	I0807 11:09:32.889945    9861 main.go:141] libmachine: Parsing certificate...
	I0807 11:09:32.889994    9861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:09:32.890016    9861 main.go:141] libmachine: Decoding PEM data...
	I0807 11:09:32.890025    9861 main.go:141] libmachine: Parsing certificate...
	I0807 11:09:32.890368    9861 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:09:33.044635    9861 main.go:141] libmachine: Creating SSH key...
	I0807 11:09:33.108734    9861 main.go:141] libmachine: Creating Disk image...
	I0807 11:09:33.108742    9861 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:09:33.109230    9861 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/auto-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/auto-921000/disk.qcow2
	I0807 11:09:33.118684    9861 main.go:141] libmachine: STDOUT: 
	I0807 11:09:33.118701    9861 main.go:141] libmachine: STDERR: 
	I0807 11:09:33.118751    9861 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/auto-921000/disk.qcow2 +20000M
	I0807 11:09:33.126719    9861 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:09:33.126733    9861 main.go:141] libmachine: STDERR: 
	I0807 11:09:33.126751    9861 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/auto-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/auto-921000/disk.qcow2
	I0807 11:09:33.126755    9861 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:09:33.126767    9861 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:09:33.126791    9861 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/auto-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/auto-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/auto-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:4e:2c:04:ab:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/auto-921000/disk.qcow2
	I0807 11:09:33.128489    9861 main.go:141] libmachine: STDOUT: 
	I0807 11:09:33.128502    9861 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:09:33.128518    9861 client.go:171] duration metric: took 238.681583ms to LocalClient.Create
	I0807 11:09:35.130678    9861 start.go:128] duration metric: took 2.266077459s to createHost
	I0807 11:09:35.130754    9861 start.go:83] releasing machines lock for "auto-921000", held for 2.266213667s
	W0807 11:09:35.130877    9861 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:09:35.143184    9861 out.go:177] * Deleting "auto-921000" in qemu2 ...
	W0807 11:09:35.170081    9861 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:09:35.170108    9861 start.go:729] Will try again in 5 seconds ...
	I0807 11:09:40.172339    9861 start.go:360] acquireMachinesLock for auto-921000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:09:40.172960    9861 start.go:364] duration metric: took 505.875µs to acquireMachinesLock for "auto-921000"
	I0807 11:09:40.173083    9861 start.go:93] Provisioning new machine with config: &{Name:auto-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:09:40.173334    9861 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:09:40.179240    9861 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0807 11:09:40.223604    9861 start.go:159] libmachine.API.Create for "auto-921000" (driver="qemu2")
	I0807 11:09:40.223667    9861 client.go:168] LocalClient.Create starting
	I0807 11:09:40.223799    9861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:09:40.223858    9861 main.go:141] libmachine: Decoding PEM data...
	I0807 11:09:40.223873    9861 main.go:141] libmachine: Parsing certificate...
	I0807 11:09:40.223953    9861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:09:40.223999    9861 main.go:141] libmachine: Decoding PEM data...
	I0807 11:09:40.224010    9861 main.go:141] libmachine: Parsing certificate...
	I0807 11:09:40.224559    9861 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:09:40.389064    9861 main.go:141] libmachine: Creating SSH key...
	I0807 11:09:40.617538    9861 main.go:141] libmachine: Creating Disk image...
	I0807 11:09:40.617546    9861 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:09:40.617784    9861 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/auto-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/auto-921000/disk.qcow2
	I0807 11:09:40.627664    9861 main.go:141] libmachine: STDOUT: 
	I0807 11:09:40.627684    9861 main.go:141] libmachine: STDERR: 
	I0807 11:09:40.627742    9861 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/auto-921000/disk.qcow2 +20000M
	I0807 11:09:40.635811    9861 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:09:40.635826    9861 main.go:141] libmachine: STDERR: 
	I0807 11:09:40.635835    9861 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/auto-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/auto-921000/disk.qcow2
	I0807 11:09:40.635838    9861 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:09:40.635850    9861 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:09:40.635896    9861 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/auto-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/auto-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/auto-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:f8:b4:25:eb:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/auto-921000/disk.qcow2
	I0807 11:09:40.637587    9861 main.go:141] libmachine: STDOUT: 
	I0807 11:09:40.637602    9861 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:09:40.637614    9861 client.go:171] duration metric: took 413.948542ms to LocalClient.Create
	I0807 11:09:42.639699    9861 start.go:128] duration metric: took 2.466367166s to createHost
	I0807 11:09:42.639737    9861 start.go:83] releasing machines lock for "auto-921000", held for 2.466790958s
	W0807 11:09:42.639949    9861 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-921000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-921000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:09:42.654242    9861 out.go:177] 
	W0807 11:09:42.659357    9861 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:09:42.659370    9861 out.go:239] * 
	* 
	W0807 11:09:42.660724    9861 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:09:42.670285    9861 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.94s)
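Every start in this group fails the same way before QEMU ever boots: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the driver gets ECONNREFUSED and minikube aborts with GUEST_PROVISION. As a triage aid, here is a minimal Go probe (a hypothetical helper, not part of net_test.go or minikube) that distinguishes a socket file with no daemon behind it from a socket that was never created:

	// probe.go: a minimal sketch that reproduces the failure mode above.
	// Dialing the socket_vmnet unix socket when no daemon is listening
	// yields ECONNREFUSED, which socket_vmnet_client surfaces as
	// `Failed to connect to "/var/run/socket_vmnet": Connection refused`.
	package main

	import (
		"errors"
		"fmt"
		"net"
		"os"
		"syscall"
	)

	func main() {
		const path = "/var/run/socket_vmnet" // path taken from the logs above

		conn, err := net.Dial("unix", path)
		switch {
		case err == nil:
			conn.Close()
			fmt.Println("socket_vmnet is up")
		case errors.Is(err, syscall.ECONNREFUSED):
			fmt.Println("socket file exists but no socket_vmnet daemon is listening")
			os.Exit(1)
		case errors.Is(err, os.ErrNotExist):
			fmt.Println("socket path missing: socket_vmnet was never started on this host")
			os.Exit(1)
		default:
			fmt.Println("dial failed:", err)
			os.Exit(1)
		}
	}

Since the logs report "Connection refused" rather than a missing path, the socket file resolves but nothing on the agent is accepting connections on it.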

TestNetworkPlugins/group/calico/Start (9.83s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-921000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-921000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.826118958s)

-- stdout --
	* [calico-921000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-921000" primary control-plane node in "calico-921000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-921000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 11:09:44.876523    9970 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:09:44.876674    9970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:09:44.876677    9970 out.go:304] Setting ErrFile to fd 2...
	I0807 11:09:44.876680    9970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:09:44.876812    9970 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:09:44.877951    9970 out.go:298] Setting JSON to false
	I0807 11:09:44.894753    9970 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5953,"bootTime":1723048231,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 11:09:44.894875    9970 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 11:09:44.900369    9970 out.go:177] * [calico-921000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 11:09:44.907164    9970 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 11:09:44.907180    9970 notify.go:220] Checking for updates...
	I0807 11:09:44.914263    9970 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:09:44.917216    9970 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 11:09:44.920249    9970 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 11:09:44.923248    9970 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 11:09:44.926238    9970 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 11:09:44.929575    9970 config.go:182] Loaded profile config "multinode-190000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 11:09:44.929644    9970 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0807 11:09:44.929714    9970 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 11:09:44.933202    9970 out.go:177] * Using the qemu2 driver based on user configuration
	I0807 11:09:44.940176    9970 start.go:297] selected driver: qemu2
	I0807 11:09:44.940182    9970 start.go:901] validating driver "qemu2" against <nil>
	I0807 11:09:44.940188    9970 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 11:09:44.942408    9970 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 11:09:44.945333    9970 out.go:177] * Automatically selected the socket_vmnet network
	I0807 11:09:44.946604    9970 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 11:09:44.946624    9970 cni.go:84] Creating CNI manager for "calico"
	I0807 11:09:44.946628    9970 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0807 11:09:44.946669    9970 start.go:340] cluster config:
	{Name:calico-921000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:09:44.950112    9970 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:09:44.957222    9970 out.go:177] * Starting "calico-921000" primary control-plane node in "calico-921000" cluster
	I0807 11:09:44.961187    9970 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 11:09:44.961203    9970 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 11:09:44.961209    9970 cache.go:56] Caching tarball of preloaded images
	I0807 11:09:44.961266    9970 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 11:09:44.961271    9970 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 11:09:44.961327    9970 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/calico-921000/config.json ...
	I0807 11:09:44.961339    9970 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/calico-921000/config.json: {Name:mk9747f3a3a25a5ae1e976f7bdc530a8554c13b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:09:44.961561    9970 start.go:360] acquireMachinesLock for calico-921000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:09:44.961593    9970 start.go:364] duration metric: took 26.458µs to acquireMachinesLock for "calico-921000"
	I0807 11:09:44.961603    9970 start.go:93] Provisioning new machine with config: &{Name:calico-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:09:44.961640    9970 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:09:44.970242    9970 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0807 11:09:44.986014    9970 start.go:159] libmachine.API.Create for "calico-921000" (driver="qemu2")
	I0807 11:09:44.986047    9970 client.go:168] LocalClient.Create starting
	I0807 11:09:44.986118    9970 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:09:44.986149    9970 main.go:141] libmachine: Decoding PEM data...
	I0807 11:09:44.986159    9970 main.go:141] libmachine: Parsing certificate...
	I0807 11:09:44.986196    9970 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:09:44.986219    9970 main.go:141] libmachine: Decoding PEM data...
	I0807 11:09:44.986227    9970 main.go:141] libmachine: Parsing certificate...
	I0807 11:09:44.986635    9970 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:09:45.142888    9970 main.go:141] libmachine: Creating SSH key...
	I0807 11:09:45.222470    9970 main.go:141] libmachine: Creating Disk image...
	I0807 11:09:45.222476    9970 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:09:45.222696    9970 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/calico-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/calico-921000/disk.qcow2
	I0807 11:09:45.232085    9970 main.go:141] libmachine: STDOUT: 
	I0807 11:09:45.232113    9970 main.go:141] libmachine: STDERR: 
	I0807 11:09:45.232160    9970 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/calico-921000/disk.qcow2 +20000M
	I0807 11:09:45.239955    9970 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:09:45.239974    9970 main.go:141] libmachine: STDERR: 
	I0807 11:09:45.239992    9970 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/calico-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/calico-921000/disk.qcow2
	I0807 11:09:45.239997    9970 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:09:45.240005    9970 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:09:45.240033    9970 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/calico-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/calico-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/calico-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:cb:07:c5:bc:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/calico-921000/disk.qcow2
	I0807 11:09:45.241663    9970 main.go:141] libmachine: STDOUT: 
	I0807 11:09:45.241677    9970 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:09:45.241705    9970 client.go:171] duration metric: took 255.657ms to LocalClient.Create
	I0807 11:09:47.243809    9970 start.go:128] duration metric: took 2.282184375s to createHost
	I0807 11:09:47.243852    9970 start.go:83] releasing machines lock for "calico-921000", held for 2.282285375s
	W0807 11:09:47.243910    9970 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:09:47.258186    9970 out.go:177] * Deleting "calico-921000" in qemu2 ...
	W0807 11:09:47.278113    9970 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:09:47.278127    9970 start.go:729] Will try again in 5 seconds ...
	I0807 11:09:52.279289    9970 start.go:360] acquireMachinesLock for calico-921000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:09:52.279919    9970 start.go:364] duration metric: took 493.208µs to acquireMachinesLock for "calico-921000"
	I0807 11:09:52.280090    9970 start.go:93] Provisioning new machine with config: &{Name:calico-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:09:52.280376    9970 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:09:52.286087    9970 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0807 11:09:52.337593    9970 start.go:159] libmachine.API.Create for "calico-921000" (driver="qemu2")
	I0807 11:09:52.337645    9970 client.go:168] LocalClient.Create starting
	I0807 11:09:52.337779    9970 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:09:52.337861    9970 main.go:141] libmachine: Decoding PEM data...
	I0807 11:09:52.337879    9970 main.go:141] libmachine: Parsing certificate...
	I0807 11:09:52.337941    9970 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:09:52.337989    9970 main.go:141] libmachine: Decoding PEM data...
	I0807 11:09:52.338007    9970 main.go:141] libmachine: Parsing certificate...
	I0807 11:09:52.338552    9970 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:09:52.502378    9970 main.go:141] libmachine: Creating SSH key...
	I0807 11:09:52.614645    9970 main.go:141] libmachine: Creating Disk image...
	I0807 11:09:52.614653    9970 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:09:52.614890    9970 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/calico-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/calico-921000/disk.qcow2
	I0807 11:09:52.624438    9970 main.go:141] libmachine: STDOUT: 
	I0807 11:09:52.624461    9970 main.go:141] libmachine: STDERR: 
	I0807 11:09:52.624525    9970 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/calico-921000/disk.qcow2 +20000M
	I0807 11:09:52.632425    9970 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:09:52.632442    9970 main.go:141] libmachine: STDERR: 
	I0807 11:09:52.632456    9970 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/calico-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/calico-921000/disk.qcow2
	I0807 11:09:52.632462    9970 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:09:52.632475    9970 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:09:52.632512    9970 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/calico-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/calico-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/calico-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:99:5a:6d:67:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/calico-921000/disk.qcow2
	I0807 11:09:52.634123    9970 main.go:141] libmachine: STDOUT: 
	I0807 11:09:52.634139    9970 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:09:52.634151    9970 client.go:171] duration metric: took 296.502666ms to LocalClient.Create
	I0807 11:09:54.635954    9970 start.go:128] duration metric: took 2.35556975s to createHost
	I0807 11:09:54.636014    9970 start.go:83] releasing machines lock for "calico-921000", held for 2.356080458s
	W0807 11:09:54.636319    9970 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-921000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-921000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:09:54.647883    9970 out.go:177] 
	W0807 11:09:54.651033    9970 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:09:54.651060    9970 out.go:239] * 
	* 
	W0807 11:09:54.652727    9970 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:09:54.661830    9970 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.83s)
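The calico log records minikube's full two-attempt flow: createHost fails, the half-created profile is deleted, a second attempt fires after five seconds ("Will try again in 5 seconds ..."), and the repeat failure becomes the exit status 80. The control flow, reduced to a sketch with hypothetical helper names standing in for minikube's internals:

	// retry.go: a simplified, hypothetical rendering of the start/retry
	// flow recorded above; it is not minikube's actual implementation.
	package main

	import (
		"fmt"
		"time"
	)

	// createHost stands in for the driver step that fails in the log with
	// `Failed to connect to "/var/run/socket_vmnet": Connection refused`.
	func createHost(profile string) error {
		return fmt.Errorf("creating host: Failed to connect to %q: Connection refused", "/var/run/socket_vmnet")
	}

	func startWithRetry(profile string) error {
		err := createHost(profile)
		if err == nil {
			return nil
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		return createHost(profile)  // second and final attempt
	}

	func main() {
		if err := startWithRetry("calico-921000"); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err) // exit status 80 in the report
		}
	}

Because both attempts hit the same host-side socket, the retry cannot succeed, which is why every profile in this group shows the identical delete-and-recreate pattern.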

TestNetworkPlugins/group/custom-flannel/Start (9.8s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-921000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-921000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.802940542s)

-- stdout --
	* [custom-flannel-921000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-921000" primary control-plane node in "custom-flannel-921000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-921000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 11:09:57.071208   10090 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:09:57.071330   10090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:09:57.071333   10090 out.go:304] Setting ErrFile to fd 2...
	I0807 11:09:57.071336   10090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:09:57.071463   10090 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:09:57.072527   10090 out.go:298] Setting JSON to false
	I0807 11:09:57.088855   10090 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5966,"bootTime":1723048231,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 11:09:57.088921   10090 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 11:09:57.094069   10090 out.go:177] * [custom-flannel-921000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 11:09:57.101929   10090 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 11:09:57.101981   10090 notify.go:220] Checking for updates...
	I0807 11:09:57.108960   10090 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:09:57.112027   10090 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 11:09:57.115013   10090 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 11:09:57.117994   10090 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 11:09:57.120982   10090 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 11:09:57.124335   10090 config.go:182] Loaded profile config "multinode-190000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 11:09:57.124399   10090 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0807 11:09:57.124449   10090 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 11:09:57.129018   10090 out.go:177] * Using the qemu2 driver based on user configuration
	I0807 11:09:57.135926   10090 start.go:297] selected driver: qemu2
	I0807 11:09:57.135932   10090 start.go:901] validating driver "qemu2" against <nil>
	I0807 11:09:57.135937   10090 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 11:09:57.138059   10090 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 11:09:57.140996   10090 out.go:177] * Automatically selected the socket_vmnet network
	I0807 11:09:57.143914   10090 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 11:09:57.143928   10090 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0807 11:09:57.143938   10090 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0807 11:09:57.143966   10090 start.go:340] cluster config:
	{Name:custom-flannel-921000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:09:57.147380   10090 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:09:57.154949   10090 out.go:177] * Starting "custom-flannel-921000" primary control-plane node in "custom-flannel-921000" cluster
	I0807 11:09:57.158919   10090 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 11:09:57.158933   10090 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 11:09:57.158939   10090 cache.go:56] Caching tarball of preloaded images
	I0807 11:09:57.158990   10090 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 11:09:57.158994   10090 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 11:09:57.159039   10090 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/custom-flannel-921000/config.json ...
	I0807 11:09:57.159051   10090 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/custom-flannel-921000/config.json: {Name:mkc69b2bda7e475871bbf51687f07c7a1f9e5533 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:09:57.159265   10090 start.go:360] acquireMachinesLock for custom-flannel-921000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:09:57.159297   10090 start.go:364] duration metric: took 24.75µs to acquireMachinesLock for "custom-flannel-921000"
	I0807 11:09:57.159307   10090 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:09:57.159339   10090 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:09:57.167853   10090 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0807 11:09:57.183555   10090 start.go:159] libmachine.API.Create for "custom-flannel-921000" (driver="qemu2")
	I0807 11:09:57.183589   10090 client.go:168] LocalClient.Create starting
	I0807 11:09:57.183650   10090 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:09:57.183688   10090 main.go:141] libmachine: Decoding PEM data...
	I0807 11:09:57.183698   10090 main.go:141] libmachine: Parsing certificate...
	I0807 11:09:57.183737   10090 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:09:57.183760   10090 main.go:141] libmachine: Decoding PEM data...
	I0807 11:09:57.183766   10090 main.go:141] libmachine: Parsing certificate...
	I0807 11:09:57.184155   10090 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:09:57.340520   10090 main.go:141] libmachine: Creating SSH key...
	I0807 11:09:57.446512   10090 main.go:141] libmachine: Creating Disk image...
	I0807 11:09:57.446519   10090 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:09:57.446722   10090 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/custom-flannel-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/custom-flannel-921000/disk.qcow2
	I0807 11:09:57.456071   10090 main.go:141] libmachine: STDOUT: 
	I0807 11:09:57.456090   10090 main.go:141] libmachine: STDERR: 
	I0807 11:09:57.456150   10090 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/custom-flannel-921000/disk.qcow2 +20000M
	I0807 11:09:57.464172   10090 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:09:57.464186   10090 main.go:141] libmachine: STDERR: 
	I0807 11:09:57.464202   10090 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/custom-flannel-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/custom-flannel-921000/disk.qcow2
	I0807 11:09:57.464205   10090 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:09:57.464228   10090 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:09:57.464257   10090 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/custom-flannel-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/custom-flannel-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/custom-flannel-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:1c:a0:1b:8b:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/custom-flannel-921000/disk.qcow2
	I0807 11:09:57.465932   10090 main.go:141] libmachine: STDOUT: 
	I0807 11:09:57.465947   10090 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:09:57.465964   10090 client.go:171] duration metric: took 282.37525ms to LocalClient.Create
	I0807 11:09:59.468202   10090 start.go:128] duration metric: took 2.308862542s to createHost
	I0807 11:09:59.468295   10090 start.go:83] releasing machines lock for "custom-flannel-921000", held for 2.3090215s
	W0807 11:09:59.468354   10090 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:09:59.475698   10090 out.go:177] * Deleting "custom-flannel-921000" in qemu2 ...
	W0807 11:09:59.507179   10090 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:09:59.507213   10090 start.go:729] Will try again in 5 seconds ...
	I0807 11:10:04.509222   10090 start.go:360] acquireMachinesLock for custom-flannel-921000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:10:04.509786   10090 start.go:364] duration metric: took 434.208µs to acquireMachinesLock for "custom-flannel-921000"
	I0807 11:10:04.509880   10090 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:10:04.510162   10090 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:10:04.518856   10090 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0807 11:10:04.569065   10090 start.go:159] libmachine.API.Create for "custom-flannel-921000" (driver="qemu2")
	I0807 11:10:04.569124   10090 client.go:168] LocalClient.Create starting
	I0807 11:10:04.569250   10090 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:10:04.569311   10090 main.go:141] libmachine: Decoding PEM data...
	I0807 11:10:04.569331   10090 main.go:141] libmachine: Parsing certificate...
	I0807 11:10:04.569402   10090 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:10:04.569453   10090 main.go:141] libmachine: Decoding PEM data...
	I0807 11:10:04.569466   10090 main.go:141] libmachine: Parsing certificate...
	I0807 11:10:04.570173   10090 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:10:04.733379   10090 main.go:141] libmachine: Creating SSH key...
	I0807 11:10:04.777193   10090 main.go:141] libmachine: Creating Disk image...
	I0807 11:10:04.777202   10090 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:10:04.777395   10090 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/custom-flannel-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/custom-flannel-921000/disk.qcow2
	I0807 11:10:04.786938   10090 main.go:141] libmachine: STDOUT: 
	I0807 11:10:04.786955   10090 main.go:141] libmachine: STDERR: 
	I0807 11:10:04.787015   10090 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/custom-flannel-921000/disk.qcow2 +20000M
	I0807 11:10:04.795099   10090 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:10:04.795114   10090 main.go:141] libmachine: STDERR: 
	I0807 11:10:04.795127   10090 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/custom-flannel-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/custom-flannel-921000/disk.qcow2
	I0807 11:10:04.795132   10090 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:10:04.795150   10090 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:10:04.795191   10090 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/custom-flannel-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/custom-flannel-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/custom-flannel-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:b3:64:2e:62:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/custom-flannel-921000/disk.qcow2
	I0807 11:10:04.796934   10090 main.go:141] libmachine: STDOUT: 
	I0807 11:10:04.796953   10090 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:10:04.796969   10090 client.go:171] duration metric: took 227.84175ms to LocalClient.Create
	I0807 11:10:06.799145   10090 start.go:128] duration metric: took 2.2889775s to createHost
	I0807 11:10:06.799252   10090 start.go:83] releasing machines lock for "custom-flannel-921000", held for 2.289451583s
	W0807 11:10:06.799613   10090 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-921000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-921000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:10:06.815146   10090 out.go:177] 
	W0807 11:10:06.819020   10090 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:10:06.819053   10090 out.go:239] * 
	* 
	W0807 11:10:06.821747   10090 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:10:06.834076   10090 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.80s)
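Note that disk preparation succeeds on every attempt; only the network hookup fails. The two qemu-img steps the log shows libmachine running (convert the raw image to qcow2, then grow it by the requested 20000M) can be reproduced standalone. A self-contained sketch, with placeholder paths rather than the report's real machine directory:

	// disk.go: a sketch of the qemu-img convert/resize pair from the log;
	// the paths and the helper name are placeholders, not minikube internals.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func createDisk(raw, qcow2 string, extraMB int) error {
		// qemu-img convert -f raw -O qcow2 <raw> <qcow2>
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
			return fmt.Errorf("convert: %v: %s", err, out)
		}
		// qemu-img resize <qcow2> +<extraMB>M (the log uses +20000M)
		if out, err := exec.Command("qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
			return fmt.Errorf("resize: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
			fmt.Println(err)
		}
	}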

TestNetworkPlugins/group/false/Start (9.78s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-921000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-921000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.781688625s)

-- stdout --
	* [false-921000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-921000" primary control-plane node in "false-921000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-921000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 11:10:09.249874   10207 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:10:09.250025   10207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:10:09.250031   10207 out.go:304] Setting ErrFile to fd 2...
	I0807 11:10:09.250034   10207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:10:09.250175   10207 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:10:09.251539   10207 out.go:298] Setting JSON to false
	I0807 11:10:09.270260   10207 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5978,"bootTime":1723048231,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 11:10:09.270348   10207 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 11:10:09.274191   10207 out.go:177] * [false-921000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 11:10:09.282317   10207 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 11:10:09.282436   10207 notify.go:220] Checking for updates...
	I0807 11:10:09.290264   10207 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:10:09.293317   10207 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 11:10:09.296299   10207 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 11:10:09.299276   10207 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 11:10:09.302331   10207 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 11:10:09.305617   10207 config.go:182] Loaded profile config "multinode-190000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 11:10:09.305686   10207 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0807 11:10:09.305743   10207 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 11:10:09.309247   10207 out.go:177] * Using the qemu2 driver based on user configuration
	I0807 11:10:09.315174   10207 start.go:297] selected driver: qemu2
	I0807 11:10:09.315182   10207 start.go:901] validating driver "qemu2" against <nil>
	I0807 11:10:09.315188   10207 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 11:10:09.317557   10207 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 11:10:09.320222   10207 out.go:177] * Automatically selected the socket_vmnet network
	I0807 11:10:09.323355   10207 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 11:10:09.323388   10207 cni.go:84] Creating CNI manager for "false"
	I0807 11:10:09.323426   10207 start.go:340] cluster config:
	{Name:false-921000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:10:09.327472   10207 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:10:09.330254   10207 out.go:177] * Starting "false-921000" primary control-plane node in "false-921000" cluster
	I0807 11:10:09.338267   10207 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 11:10:09.338295   10207 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 11:10:09.338307   10207 cache.go:56] Caching tarball of preloaded images
	I0807 11:10:09.338380   10207 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 11:10:09.338387   10207 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 11:10:09.338437   10207 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/false-921000/config.json ...
	I0807 11:10:09.338449   10207 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/false-921000/config.json: {Name:mk907713b88d5a581cac1b096b805464a8c7e3b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:10:09.338780   10207 start.go:360] acquireMachinesLock for false-921000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:10:09.338811   10207 start.go:364] duration metric: took 25.791µs to acquireMachinesLock for "false-921000"
	I0807 11:10:09.338821   10207 start.go:93] Provisioning new machine with config: &{Name:false-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:10:09.338869   10207 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:10:09.343279   10207 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0807 11:10:09.359072   10207 start.go:159] libmachine.API.Create for "false-921000" (driver="qemu2")
	I0807 11:10:09.359100   10207 client.go:168] LocalClient.Create starting
	I0807 11:10:09.359163   10207 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:10:09.359196   10207 main.go:141] libmachine: Decoding PEM data...
	I0807 11:10:09.359205   10207 main.go:141] libmachine: Parsing certificate...
	I0807 11:10:09.359240   10207 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:10:09.359263   10207 main.go:141] libmachine: Decoding PEM data...
	I0807 11:10:09.359271   10207 main.go:141] libmachine: Parsing certificate...
	I0807 11:10:09.359673   10207 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:10:09.513538   10207 main.go:141] libmachine: Creating SSH key...
	I0807 11:10:09.580736   10207 main.go:141] libmachine: Creating Disk image...
	I0807 11:10:09.580745   10207 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:10:09.580947   10207 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/false-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/false-921000/disk.qcow2
	I0807 11:10:09.590170   10207 main.go:141] libmachine: STDOUT: 
	I0807 11:10:09.590191   10207 main.go:141] libmachine: STDERR: 
	I0807 11:10:09.590241   10207 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/false-921000/disk.qcow2 +20000M
	I0807 11:10:09.598261   10207 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:10:09.598275   10207 main.go:141] libmachine: STDERR: 
	I0807 11:10:09.598288   10207 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/false-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/false-921000/disk.qcow2
	I0807 11:10:09.598293   10207 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:10:09.598306   10207 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:10:09.598333   10207 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/false-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/false-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/false-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:a6:ea:e1:59:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/false-921000/disk.qcow2
	I0807 11:10:09.599911   10207 main.go:141] libmachine: STDOUT: 
	I0807 11:10:09.599927   10207 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:10:09.599945   10207 client.go:171] duration metric: took 240.841917ms to LocalClient.Create
	I0807 11:10:11.602105   10207 start.go:128] duration metric: took 2.263227541s to createHost
	I0807 11:10:11.602233   10207 start.go:83] releasing machines lock for "false-921000", held for 2.263444333s
	W0807 11:10:11.602310   10207 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:10:11.621731   10207 out.go:177] * Deleting "false-921000" in qemu2 ...
	W0807 11:10:11.649635   10207 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:10:11.649664   10207 start.go:729] Will try again in 5 seconds ...
	I0807 11:10:16.651767   10207 start.go:360] acquireMachinesLock for false-921000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:10:16.651993   10207 start.go:364] duration metric: took 180.417µs to acquireMachinesLock for "false-921000"
	I0807 11:10:16.652050   10207 start.go:93] Provisioning new machine with config: &{Name:false-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:10:16.652167   10207 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:10:16.661507   10207 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0807 11:10:16.681203   10207 start.go:159] libmachine.API.Create for "false-921000" (driver="qemu2")
	I0807 11:10:16.681232   10207 client.go:168] LocalClient.Create starting
	I0807 11:10:16.681314   10207 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:10:16.681351   10207 main.go:141] libmachine: Decoding PEM data...
	I0807 11:10:16.681363   10207 main.go:141] libmachine: Parsing certificate...
	I0807 11:10:16.681396   10207 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:10:16.681420   10207 main.go:141] libmachine: Decoding PEM data...
	I0807 11:10:16.681426   10207 main.go:141] libmachine: Parsing certificate...
	I0807 11:10:16.681755   10207 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:10:16.834731   10207 main.go:141] libmachine: Creating SSH key...
	I0807 11:10:16.938590   10207 main.go:141] libmachine: Creating Disk image...
	I0807 11:10:16.938596   10207 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:10:16.938819   10207 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/false-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/false-921000/disk.qcow2
	I0807 11:10:16.949012   10207 main.go:141] libmachine: STDOUT: 
	I0807 11:10:16.949035   10207 main.go:141] libmachine: STDERR: 
	I0807 11:10:16.949110   10207 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/false-921000/disk.qcow2 +20000M
	I0807 11:10:16.958625   10207 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:10:16.958651   10207 main.go:141] libmachine: STDERR: 
	I0807 11:10:16.958665   10207 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/false-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/false-921000/disk.qcow2
	I0807 11:10:16.958672   10207 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:10:16.958683   10207 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:10:16.958720   10207 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/false-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/false-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/false-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:b5:cc:83:32:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/false-921000/disk.qcow2
	I0807 11:10:16.960784   10207 main.go:141] libmachine: STDOUT: 
	I0807 11:10:16.960802   10207 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:10:16.960814   10207 client.go:171] duration metric: took 279.58275ms to LocalClient.Create
	I0807 11:10:18.962944   10207 start.go:128] duration metric: took 2.31074275s to createHost
	I0807 11:10:18.962978   10207 start.go:83] releasing machines lock for "false-921000", held for 2.31100975s
	W0807 11:10:18.963142   10207 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-921000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:10:18.974483   10207 out.go:177] 
	W0807 11:10:18.978593   10207 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:10:18.978604   10207 out.go:239] * 
	W0807 11:10:18.979565   10207 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:10:18.991477   10207 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.78s)
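
Note: the log above also shows minikube's recovery path: after the first LocalClient.Create failure it deletes the half-created profile, waits five seconds ("Will try again in 5 seconds ...", start.go:729), and retries exactly once before exiting with GUEST_PROVISION. A simplified Go sketch of that retry-once pattern follows; the helper name is hypothetical, and the real logic lives in minikube's start.go.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHostWithRetry is a hypothetical stand-in for the behavior logged
	// above: one retry after a fixed five-second delay, then the error is fatal.
	func startHostWithRetry(create func() error) error {
		if err := create(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			return create()
		}
		return nil
	}

	func main() {
		err := startHostWithRetry(func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		})
		fmt.Println("final error:", err) // both attempts fail, as in the log above
	}

Because the refused socket is an environment problem rather than a transient one, the second attempt fails the same way roughly five seconds later, which matches the ~9.8s duration of each test in this group.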

TestNetworkPlugins/group/kindnet/Start (9.81s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-921000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-921000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.811234125s)

-- stdout --
	* [kindnet-921000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-921000" primary control-plane node in "kindnet-921000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-921000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 11:10:21.177129   10318 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:10:21.177273   10318 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:10:21.177276   10318 out.go:304] Setting ErrFile to fd 2...
	I0807 11:10:21.177281   10318 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:10:21.177414   10318 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:10:21.178541   10318 out.go:298] Setting JSON to false
	I0807 11:10:21.195308   10318 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5990,"bootTime":1723048231,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 11:10:21.195400   10318 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 11:10:21.200388   10318 out.go:177] * [kindnet-921000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 11:10:21.208311   10318 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 11:10:21.208344   10318 notify.go:220] Checking for updates...
	I0807 11:10:21.213759   10318 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:10:21.217277   10318 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 11:10:21.220342   10318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 11:10:21.223344   10318 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 11:10:21.226344   10318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 11:10:21.229657   10318 config.go:182] Loaded profile config "multinode-190000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 11:10:21.229720   10318 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0807 11:10:21.229764   10318 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 11:10:21.234295   10318 out.go:177] * Using the qemu2 driver based on user configuration
	I0807 11:10:21.241298   10318 start.go:297] selected driver: qemu2
	I0807 11:10:21.241305   10318 start.go:901] validating driver "qemu2" against <nil>
	I0807 11:10:21.241310   10318 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 11:10:21.243500   10318 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 11:10:21.246280   10318 out.go:177] * Automatically selected the socket_vmnet network
	I0807 11:10:21.249318   10318 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 11:10:21.249334   10318 cni.go:84] Creating CNI manager for "kindnet"
	I0807 11:10:21.249338   10318 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0807 11:10:21.249367   10318 start.go:340] cluster config:
	{Name:kindnet-921000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:10:21.252908   10318 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:10:21.260185   10318 out.go:177] * Starting "kindnet-921000" primary control-plane node in "kindnet-921000" cluster
	I0807 11:10:21.264292   10318 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 11:10:21.264310   10318 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 11:10:21.264325   10318 cache.go:56] Caching tarball of preloaded images
	I0807 11:10:21.264393   10318 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 11:10:21.264398   10318 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 11:10:21.264464   10318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/kindnet-921000/config.json ...
	I0807 11:10:21.264475   10318 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/kindnet-921000/config.json: {Name:mk33a6d099c02dc15b0cc30d5fd18a53f2825b0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:10:21.264870   10318 start.go:360] acquireMachinesLock for kindnet-921000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:10:21.264909   10318 start.go:364] duration metric: took 32.042µs to acquireMachinesLock for "kindnet-921000"
	I0807 11:10:21.264921   10318 start.go:93] Provisioning new machine with config: &{Name:kindnet-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:10:21.264979   10318 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:10:21.273328   10318 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0807 11:10:21.291030   10318 start.go:159] libmachine.API.Create for "kindnet-921000" (driver="qemu2")
	I0807 11:10:21.291060   10318 client.go:168] LocalClient.Create starting
	I0807 11:10:21.291121   10318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:10:21.291150   10318 main.go:141] libmachine: Decoding PEM data...
	I0807 11:10:21.291164   10318 main.go:141] libmachine: Parsing certificate...
	I0807 11:10:21.291205   10318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:10:21.291227   10318 main.go:141] libmachine: Decoding PEM data...
	I0807 11:10:21.291236   10318 main.go:141] libmachine: Parsing certificate...
	I0807 11:10:21.291640   10318 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:10:21.445501   10318 main.go:141] libmachine: Creating SSH key...
	I0807 11:10:21.527357   10318 main.go:141] libmachine: Creating Disk image...
	I0807 11:10:21.527362   10318 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:10:21.527585   10318 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kindnet-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kindnet-921000/disk.qcow2
	I0807 11:10:21.537100   10318 main.go:141] libmachine: STDOUT: 
	I0807 11:10:21.537116   10318 main.go:141] libmachine: STDERR: 
	I0807 11:10:21.537166   10318 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kindnet-921000/disk.qcow2 +20000M
	I0807 11:10:21.545058   10318 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:10:21.545071   10318 main.go:141] libmachine: STDERR: 
	I0807 11:10:21.545089   10318 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kindnet-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kindnet-921000/disk.qcow2
	I0807 11:10:21.545094   10318 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:10:21.545111   10318 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:10:21.545140   10318 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kindnet-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kindnet-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kindnet-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:90:f4:a2:52:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kindnet-921000/disk.qcow2
	I0807 11:10:21.546765   10318 main.go:141] libmachine: STDOUT: 
	I0807 11:10:21.546781   10318 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:10:21.546800   10318 client.go:171] duration metric: took 255.737666ms to LocalClient.Create
	I0807 11:10:23.548972   10318 start.go:128] duration metric: took 2.283996708s to createHost
	I0807 11:10:23.549037   10318 start.go:83] releasing machines lock for "kindnet-921000", held for 2.284150375s
	W0807 11:10:23.549103   10318 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:10:23.562244   10318 out.go:177] * Deleting "kindnet-921000" in qemu2 ...
	W0807 11:10:23.591078   10318 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:10:23.591120   10318 start.go:729] Will try again in 5 seconds ...
	I0807 11:10:28.593242   10318 start.go:360] acquireMachinesLock for kindnet-921000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:10:28.593851   10318 start.go:364] duration metric: took 460.375µs to acquireMachinesLock for "kindnet-921000"
	I0807 11:10:28.593976   10318 start.go:93] Provisioning new machine with config: &{Name:kindnet-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:10:28.594249   10318 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:10:28.603640   10318 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0807 11:10:28.645926   10318 start.go:159] libmachine.API.Create for "kindnet-921000" (driver="qemu2")
	I0807 11:10:28.645972   10318 client.go:168] LocalClient.Create starting
	I0807 11:10:28.646083   10318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:10:28.646139   10318 main.go:141] libmachine: Decoding PEM data...
	I0807 11:10:28.646158   10318 main.go:141] libmachine: Parsing certificate...
	I0807 11:10:28.646223   10318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:10:28.646263   10318 main.go:141] libmachine: Decoding PEM data...
	I0807 11:10:28.646279   10318 main.go:141] libmachine: Parsing certificate...
	I0807 11:10:28.648295   10318 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:10:28.821194   10318 main.go:141] libmachine: Creating SSH key...
	I0807 11:10:28.897054   10318 main.go:141] libmachine: Creating Disk image...
	I0807 11:10:28.897059   10318 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:10:28.897298   10318 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kindnet-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kindnet-921000/disk.qcow2
	I0807 11:10:28.906558   10318 main.go:141] libmachine: STDOUT: 
	I0807 11:10:28.906574   10318 main.go:141] libmachine: STDERR: 
	I0807 11:10:28.906616   10318 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kindnet-921000/disk.qcow2 +20000M
	I0807 11:10:28.914475   10318 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:10:28.914489   10318 main.go:141] libmachine: STDERR: 
	I0807 11:10:28.914497   10318 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kindnet-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kindnet-921000/disk.qcow2
	I0807 11:10:28.914501   10318 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:10:28.914511   10318 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:10:28.914538   10318 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kindnet-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kindnet-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kindnet-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:0a:c1:2f:f9:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kindnet-921000/disk.qcow2
	I0807 11:10:28.916255   10318 main.go:141] libmachine: STDOUT: 
	I0807 11:10:28.916269   10318 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:10:28.916281   10318 client.go:171] duration metric: took 270.30525ms to LocalClient.Create
	I0807 11:10:30.918443   10318 start.go:128] duration metric: took 2.324198458s to createHost
	I0807 11:10:30.918530   10318 start.go:83] releasing machines lock for "kindnet-921000", held for 2.324689708s
	W0807 11:10:30.918971   10318 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-921000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:10:30.930573   10318 out.go:177] 
	W0807 11:10:30.935519   10318 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:10:30.935542   10318 out.go:239] * 
	W0807 11:10:30.938604   10318 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:10:30.947543   10318 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.81s)
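
Note: before the network step fails, the disk-image preparation in these logs always succeeds: libmachine converts the raw boot image to qcow2 and grows it by 20000 MB, and only the subsequent socket_vmnet_client launch (which hands QEMU the connected socket as file descriptor 3 via -netdev socket,id=net0,fd=3) is refused. A Go sketch of those two qemu-img steps follows; the paths are illustrative placeholders rather than the real machine directory, and it assumes qemu-img is on PATH.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors the two qemu-img invocations logged by libmachine above;
		// the paths are placeholders, not the real machine directory.
		raw, qcow2 := "/tmp/disk.qcow2.raw", "/tmp/disk.qcow2"
		steps := [][]string{
			{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2},
			{"qemu-img", "resize", qcow2, "+20000M"},
		}
		for _, step := range steps {
			if out, err := exec.Command(step[0], step[1:]...).CombinedOutput(); err != nil {
				fmt.Printf("%v failed: %v\n%s", step, err, out)
				return
			}
		}
		fmt.Println("disk image converted and resized")
	}

That the qemu-img steps pass while the socket dial fails localizes the problem to the host's socket_vmnet daemon, not to QEMU or the cached ISO.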

TestNetworkPlugins/group/flannel/Start (9.81s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-921000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-921000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.808509875s)

-- stdout --
	* [flannel-921000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-921000" primary control-plane node in "flannel-921000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-921000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 11:10:33.218029   10433 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:10:33.218192   10433 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:10:33.218201   10433 out.go:304] Setting ErrFile to fd 2...
	I0807 11:10:33.218203   10433 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:10:33.218340   10433 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:10:33.219429   10433 out.go:298] Setting JSON to false
	I0807 11:10:33.236067   10433 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6002,"bootTime":1723048231,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 11:10:33.236193   10433 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 11:10:33.250347   10433 out.go:177] * [flannel-921000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 11:10:33.257701   10433 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 11:10:33.257736   10433 notify.go:220] Checking for updates...
	I0807 11:10:33.265621   10433 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:10:33.268576   10433 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 11:10:33.271589   10433 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 11:10:33.274684   10433 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 11:10:33.277567   10433 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 11:10:33.280914   10433 config.go:182] Loaded profile config "multinode-190000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 11:10:33.280985   10433 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0807 11:10:33.281037   10433 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 11:10:33.284659   10433 out.go:177] * Using the qemu2 driver based on user configuration
	I0807 11:10:33.291618   10433 start.go:297] selected driver: qemu2
	I0807 11:10:33.291625   10433 start.go:901] validating driver "qemu2" against <nil>
	I0807 11:10:33.291633   10433 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 11:10:33.293959   10433 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 11:10:33.309657   10433 out.go:177] * Automatically selected the socket_vmnet network
	I0807 11:10:33.312745   10433 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 11:10:33.312775   10433 cni.go:84] Creating CNI manager for "flannel"
	I0807 11:10:33.312780   10433 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0807 11:10:33.312812   10433 start.go:340] cluster config:
	{Name:flannel-921000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:10:33.316663   10433 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:10:33.324663   10433 out.go:177] * Starting "flannel-921000" primary control-plane node in "flannel-921000" cluster
	I0807 11:10:33.328576   10433 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 11:10:33.328594   10433 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 11:10:33.328604   10433 cache.go:56] Caching tarball of preloaded images
	I0807 11:10:33.328665   10433 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 11:10:33.328672   10433 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 11:10:33.328732   10433 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/flannel-921000/config.json ...
	I0807 11:10:33.328749   10433 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/flannel-921000/config.json: {Name:mkbf3991f24c6d39991e6c2f3baa1695c82e74ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:10:33.329069   10433 start.go:360] acquireMachinesLock for flannel-921000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:10:33.329109   10433 start.go:364] duration metric: took 34.542µs to acquireMachinesLock for "flannel-921000"
	I0807 11:10:33.329120   10433 start.go:93] Provisioning new machine with config: &{Name:flannel-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:10:33.329152   10433 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:10:33.337595   10433 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0807 11:10:33.354955   10433 start.go:159] libmachine.API.Create for "flannel-921000" (driver="qemu2")
	I0807 11:10:33.354990   10433 client.go:168] LocalClient.Create starting
	I0807 11:10:33.355069   10433 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:10:33.355101   10433 main.go:141] libmachine: Decoding PEM data...
	I0807 11:10:33.355112   10433 main.go:141] libmachine: Parsing certificate...
	I0807 11:10:33.355158   10433 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:10:33.355182   10433 main.go:141] libmachine: Decoding PEM data...
	I0807 11:10:33.355191   10433 main.go:141] libmachine: Parsing certificate...
	I0807 11:10:33.355590   10433 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:10:33.510306   10433 main.go:141] libmachine: Creating SSH key...
	I0807 11:10:33.609444   10433 main.go:141] libmachine: Creating Disk image...
	I0807 11:10:33.609450   10433 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:10:33.609672   10433 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/flannel-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/flannel-921000/disk.qcow2
	I0807 11:10:33.619053   10433 main.go:141] libmachine: STDOUT: 
	I0807 11:10:33.619075   10433 main.go:141] libmachine: STDERR: 
	I0807 11:10:33.619150   10433 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/flannel-921000/disk.qcow2 +20000M
	I0807 11:10:33.627243   10433 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:10:33.627258   10433 main.go:141] libmachine: STDERR: 
	I0807 11:10:33.627283   10433 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/flannel-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/flannel-921000/disk.qcow2
	I0807 11:10:33.627290   10433 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:10:33.627300   10433 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:10:33.627330   10433 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/flannel-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/flannel-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/flannel-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:4a:34:84:62:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/flannel-921000/disk.qcow2
	I0807 11:10:33.629116   10433 main.go:141] libmachine: STDOUT: 
	I0807 11:10:33.629136   10433 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:10:33.629158   10433 client.go:171] duration metric: took 274.166542ms to LocalClient.Create
	I0807 11:10:35.631363   10433 start.go:128] duration metric: took 2.302223042s to createHost
	I0807 11:10:35.631416   10433 start.go:83] releasing machines lock for "flannel-921000", held for 2.302328958s
	W0807 11:10:35.631478   10433 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:10:35.645641   10433 out.go:177] * Deleting "flannel-921000" in qemu2 ...
	W0807 11:10:35.674747   10433 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:10:35.674782   10433 start.go:729] Will try again in 5 seconds ...
	I0807 11:10:40.674791   10433 start.go:360] acquireMachinesLock for flannel-921000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:10:40.674898   10433 start.go:364] duration metric: took 90.667µs to acquireMachinesLock for "flannel-921000"
	I0807 11:10:40.674909   10433 start.go:93] Provisioning new machine with config: &{Name:flannel-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:10:40.674981   10433 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:10:40.682104   10433 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0807 11:10:40.698853   10433 start.go:159] libmachine.API.Create for "flannel-921000" (driver="qemu2")
	I0807 11:10:40.698905   10433 client.go:168] LocalClient.Create starting
	I0807 11:10:40.698981   10433 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:10:40.699026   10433 main.go:141] libmachine: Decoding PEM data...
	I0807 11:10:40.699035   10433 main.go:141] libmachine: Parsing certificate...
	I0807 11:10:40.699070   10433 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:10:40.699095   10433 main.go:141] libmachine: Decoding PEM data...
	I0807 11:10:40.699101   10433 main.go:141] libmachine: Parsing certificate...
	I0807 11:10:40.699427   10433 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:10:40.853330   10433 main.go:141] libmachine: Creating SSH key...
	I0807 11:10:40.935622   10433 main.go:141] libmachine: Creating Disk image...
	I0807 11:10:40.935634   10433 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:10:40.935878   10433 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/flannel-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/flannel-921000/disk.qcow2
	I0807 11:10:40.946316   10433 main.go:141] libmachine: STDOUT: 
	I0807 11:10:40.946338   10433 main.go:141] libmachine: STDERR: 
	I0807 11:10:40.946405   10433 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/flannel-921000/disk.qcow2 +20000M
	I0807 11:10:40.955239   10433 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:10:40.955259   10433 main.go:141] libmachine: STDERR: 
	I0807 11:10:40.955275   10433 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/flannel-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/flannel-921000/disk.qcow2
	I0807 11:10:40.955278   10433 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:10:40.955289   10433 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:10:40.955318   10433 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/flannel-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/flannel-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/flannel-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:b1:3c:e2:a8:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/flannel-921000/disk.qcow2
	I0807 11:10:40.957196   10433 main.go:141] libmachine: STDOUT: 
	I0807 11:10:40.957213   10433 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:10:40.957225   10433 client.go:171] duration metric: took 258.319708ms to LocalClient.Create
	I0807 11:10:42.959384   10433 start.go:128] duration metric: took 2.284405042s to createHost
	I0807 11:10:42.959447   10433 start.go:83] releasing machines lock for "flannel-921000", held for 2.284572167s
	W0807 11:10:42.959800   10433 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-921000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-921000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:10:42.972398   10433 out.go:177] 
	W0807 11:10:42.975451   10433 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:10:42.975476   10433 out.go:239] * 
	* 
	W0807 11:10:42.977082   10433 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:10:42.987411   10433 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.81s)
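
Analysis: every attempt in this group fails the same way. The qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and each launch dies with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', meaning no socket_vmnet daemon is listening on the host; QEMU is never started and minikube exits with GUEST_PROVISION (exit status 80). A minimal host-side triage sketch, assuming socket_vmnet was installed via Homebrew as the qemu2 driver docs suggest; these commands are illustrative and were not part of the test run:

	# Does the expected unix socket exist, and is a daemon holding it open?
	ls -l /var/run/socket_vmnet
	sudo lsof -U | grep socket_vmnet

	# If not, (re)start the daemon; it must run as root to create the vmnet interface.
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet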

TestNetworkPlugins/group/enable-default-cni/Start (9.83s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-921000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-921000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.823940458s)

-- stdout --
	* [enable-default-cni-921000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-921000" primary control-plane node in "enable-default-cni-921000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-921000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 11:10:45.311463   10550 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:10:45.311578   10550 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:10:45.311581   10550 out.go:304] Setting ErrFile to fd 2...
	I0807 11:10:45.311583   10550 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:10:45.311703   10550 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:10:45.312755   10550 out.go:298] Setting JSON to false
	I0807 11:10:45.329131   10550 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6014,"bootTime":1723048231,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 11:10:45.329198   10550 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 11:10:45.335861   10550 out.go:177] * [enable-default-cni-921000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 11:10:45.342850   10550 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 11:10:45.342935   10550 notify.go:220] Checking for updates...
	I0807 11:10:45.349756   10550 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:10:45.352818   10550 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 11:10:45.355853   10550 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 11:10:45.358763   10550 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 11:10:45.361826   10550 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 11:10:45.365223   10550 config.go:182] Loaded profile config "multinode-190000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 11:10:45.365298   10550 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0807 11:10:45.365348   10550 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 11:10:45.369774   10550 out.go:177] * Using the qemu2 driver based on user configuration
	I0807 11:10:45.376835   10550 start.go:297] selected driver: qemu2
	I0807 11:10:45.376842   10550 start.go:901] validating driver "qemu2" against <nil>
	I0807 11:10:45.376850   10550 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 11:10:45.379165   10550 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 11:10:45.381833   10550 out.go:177] * Automatically selected the socket_vmnet network
	E0807 11:10:45.384906   10550 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0807 11:10:45.384920   10550 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 11:10:45.384939   10550 cni.go:84] Creating CNI manager for "bridge"
	I0807 11:10:45.384945   10550 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 11:10:45.384973   10550 start.go:340] cluster config:
	{Name:enable-default-cni-921000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:10:45.388539   10550 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:10:45.402779   10550 out.go:177] * Starting "enable-default-cni-921000" primary control-plane node in "enable-default-cni-921000" cluster
	I0807 11:10:45.406895   10550 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 11:10:45.406916   10550 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 11:10:45.406924   10550 cache.go:56] Caching tarball of preloaded images
	I0807 11:10:45.406985   10550 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 11:10:45.406991   10550 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 11:10:45.407063   10550 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/enable-default-cni-921000/config.json ...
	I0807 11:10:45.407076   10550 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/enable-default-cni-921000/config.json: {Name:mk44d0e2b3d50419ecda730f6e5b8f671fdafd24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:10:45.407309   10550 start.go:360] acquireMachinesLock for enable-default-cni-921000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:10:45.407346   10550 start.go:364] duration metric: took 27.291µs to acquireMachinesLock for "enable-default-cni-921000"
	I0807 11:10:45.407356   10550 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:10:45.407385   10550 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:10:45.415805   10550 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0807 11:10:45.431347   10550 start.go:159] libmachine.API.Create for "enable-default-cni-921000" (driver="qemu2")
	I0807 11:10:45.431376   10550 client.go:168] LocalClient.Create starting
	I0807 11:10:45.431437   10550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:10:45.431469   10550 main.go:141] libmachine: Decoding PEM data...
	I0807 11:10:45.431480   10550 main.go:141] libmachine: Parsing certificate...
	I0807 11:10:45.431517   10550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:10:45.431540   10550 main.go:141] libmachine: Decoding PEM data...
	I0807 11:10:45.431546   10550 main.go:141] libmachine: Parsing certificate...
	I0807 11:10:45.431966   10550 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:10:45.585256   10550 main.go:141] libmachine: Creating SSH key...
	I0807 11:10:45.708792   10550 main.go:141] libmachine: Creating Disk image...
	I0807 11:10:45.708798   10550 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:10:45.709009   10550 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/enable-default-cni-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/enable-default-cni-921000/disk.qcow2
	I0807 11:10:45.718288   10550 main.go:141] libmachine: STDOUT: 
	I0807 11:10:45.718309   10550 main.go:141] libmachine: STDERR: 
	I0807 11:10:45.718357   10550 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/enable-default-cni-921000/disk.qcow2 +20000M
	I0807 11:10:45.726383   10550 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:10:45.726397   10550 main.go:141] libmachine: STDERR: 
	I0807 11:10:45.726425   10550 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/enable-default-cni-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/enable-default-cni-921000/disk.qcow2
	I0807 11:10:45.726430   10550 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:10:45.726441   10550 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:10:45.726471   10550 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/enable-default-cni-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/enable-default-cni-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/enable-default-cni-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:52:68:65:3b:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/enable-default-cni-921000/disk.qcow2
	I0807 11:10:45.728113   10550 main.go:141] libmachine: STDOUT: 
	I0807 11:10:45.728129   10550 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:10:45.728149   10550 client.go:171] duration metric: took 296.77175ms to LocalClient.Create
	I0807 11:10:47.730309   10550 start.go:128] duration metric: took 2.3229355s to createHost
	I0807 11:10:47.730363   10550 start.go:83] releasing machines lock for "enable-default-cni-921000", held for 2.323043084s
	W0807 11:10:47.730434   10550 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:10:47.743200   10550 out.go:177] * Deleting "enable-default-cni-921000" in qemu2 ...
	W0807 11:10:47.766972   10550 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:10:47.766997   10550 start.go:729] Will try again in 5 seconds ...
	I0807 11:10:52.769096   10550 start.go:360] acquireMachinesLock for enable-default-cni-921000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:10:52.769747   10550 start.go:364] duration metric: took 554.125µs to acquireMachinesLock for "enable-default-cni-921000"
	I0807 11:10:52.769810   10550 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:10:52.770119   10550 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:10:52.777741   10550 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0807 11:10:52.830256   10550 start.go:159] libmachine.API.Create for "enable-default-cni-921000" (driver="qemu2")
	I0807 11:10:52.830304   10550 client.go:168] LocalClient.Create starting
	I0807 11:10:52.830436   10550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:10:52.830506   10550 main.go:141] libmachine: Decoding PEM data...
	I0807 11:10:52.830525   10550 main.go:141] libmachine: Parsing certificate...
	I0807 11:10:52.830585   10550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:10:52.830629   10550 main.go:141] libmachine: Decoding PEM data...
	I0807 11:10:52.830640   10550 main.go:141] libmachine: Parsing certificate...
	I0807 11:10:52.831182   10550 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:10:52.996400   10550 main.go:141] libmachine: Creating SSH key...
	I0807 11:10:53.046000   10550 main.go:141] libmachine: Creating Disk image...
	I0807 11:10:53.046008   10550 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:10:53.046233   10550 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/enable-default-cni-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/enable-default-cni-921000/disk.qcow2
	I0807 11:10:53.055361   10550 main.go:141] libmachine: STDOUT: 
	I0807 11:10:53.055378   10550 main.go:141] libmachine: STDERR: 
	I0807 11:10:53.055435   10550 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/enable-default-cni-921000/disk.qcow2 +20000M
	I0807 11:10:53.063350   10550 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:10:53.063366   10550 main.go:141] libmachine: STDERR: 
	I0807 11:10:53.063383   10550 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/enable-default-cni-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/enable-default-cni-921000/disk.qcow2
	I0807 11:10:53.063389   10550 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:10:53.063398   10550 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:10:53.063428   10550 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/enable-default-cni-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/enable-default-cni-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/enable-default-cni-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:34:5e:68:90:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/enable-default-cni-921000/disk.qcow2
	I0807 11:10:53.065142   10550 main.go:141] libmachine: STDOUT: 
	I0807 11:10:53.065157   10550 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:10:53.065170   10550 client.go:171] duration metric: took 234.86275ms to LocalClient.Create
	I0807 11:10:55.067332   10550 start.go:128] duration metric: took 2.297198s to createHost
	I0807 11:10:55.067407   10550 start.go:83] releasing machines lock for "enable-default-cni-921000", held for 2.29766875s
	W0807 11:10:55.067722   10550 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-921000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-921000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:10:55.077439   10550 out.go:177] 
	W0807 11:10:55.081273   10550 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:10:55.081290   10550 out.go:239] * 
	* 
	W0807 11:10:55.083075   10550 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:10:55.093386   10550 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.83s)
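
Note the E-level line above (start_flags.go:464): minikube treats --enable-default-cni as deprecated and rewrites it to --cni=bridge before building the cluster config, which is why the dump shows CNI:bridge. Once the socket_vmnet daemon is reachable, the invocation below should be equivalent to this test's start command; it is a sketch reusing the test's binary and profile name, not output from the run:

	# Equivalent start without the deprecated flag:
	out/minikube-darwin-arm64 start -p enable-default-cni-921000 --memory=3072 \
	  --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2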

TestNetworkPlugins/group/bridge/Start (9.78s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-921000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-921000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.779652375s)

-- stdout --
	* [bridge-921000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-921000" primary control-plane node in "bridge-921000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-921000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 11:10:57.262268   10664 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:10:57.262400   10664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:10:57.262403   10664 out.go:304] Setting ErrFile to fd 2...
	I0807 11:10:57.262406   10664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:10:57.262540   10664 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:10:57.263528   10664 out.go:298] Setting JSON to false
	I0807 11:10:57.279844   10664 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6026,"bootTime":1723048231,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 11:10:57.279915   10664 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 11:10:57.286803   10664 out.go:177] * [bridge-921000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 11:10:57.293581   10664 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 11:10:57.293629   10664 notify.go:220] Checking for updates...
	I0807 11:10:57.301543   10664 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:10:57.305559   10664 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 11:10:57.308541   10664 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 11:10:57.311583   10664 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 11:10:57.314635   10664 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 11:10:57.317863   10664 config.go:182] Loaded profile config "multinode-190000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 11:10:57.317931   10664 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0807 11:10:57.317975   10664 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 11:10:57.322547   10664 out.go:177] * Using the qemu2 driver based on user configuration
	I0807 11:10:57.329513   10664 start.go:297] selected driver: qemu2
	I0807 11:10:57.329520   10664 start.go:901] validating driver "qemu2" against <nil>
	I0807 11:10:57.329527   10664 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 11:10:57.331986   10664 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 11:10:57.334519   10664 out.go:177] * Automatically selected the socket_vmnet network
	I0807 11:10:57.337608   10664 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 11:10:57.337643   10664 cni.go:84] Creating CNI manager for "bridge"
	I0807 11:10:57.337647   10664 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 11:10:57.337684   10664 start.go:340] cluster config:
	{Name:bridge-921000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:10:57.341219   10664 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:10:57.348558   10664 out.go:177] * Starting "bridge-921000" primary control-plane node in "bridge-921000" cluster
	I0807 11:10:57.352595   10664 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 11:10:57.352607   10664 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 11:10:57.352616   10664 cache.go:56] Caching tarball of preloaded images
	I0807 11:10:57.352676   10664 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 11:10:57.352688   10664 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 11:10:57.352742   10664 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/bridge-921000/config.json ...
	I0807 11:10:57.352752   10664 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/bridge-921000/config.json: {Name:mk12c481b368051e085d535c5df21784e30cd855 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:10:57.353029   10664 start.go:360] acquireMachinesLock for bridge-921000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:10:57.353059   10664 start.go:364] duration metric: took 24.583µs to acquireMachinesLock for "bridge-921000"
	I0807 11:10:57.353068   10664 start.go:93] Provisioning new machine with config: &{Name:bridge-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:10:57.353128   10664 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:10:57.357572   10664 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0807 11:10:57.373520   10664 start.go:159] libmachine.API.Create for "bridge-921000" (driver="qemu2")
	I0807 11:10:57.373551   10664 client.go:168] LocalClient.Create starting
	I0807 11:10:57.373623   10664 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:10:57.373653   10664 main.go:141] libmachine: Decoding PEM data...
	I0807 11:10:57.373663   10664 main.go:141] libmachine: Parsing certificate...
	I0807 11:10:57.373705   10664 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:10:57.373728   10664 main.go:141] libmachine: Decoding PEM data...
	I0807 11:10:57.373740   10664 main.go:141] libmachine: Parsing certificate...
	I0807 11:10:57.374133   10664 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:10:57.528135   10664 main.go:141] libmachine: Creating SSH key...
	I0807 11:10:57.664198   10664 main.go:141] libmachine: Creating Disk image...
	I0807 11:10:57.664206   10664 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:10:57.664430   10664 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/bridge-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/bridge-921000/disk.qcow2
	I0807 11:10:57.674105   10664 main.go:141] libmachine: STDOUT: 
	I0807 11:10:57.674121   10664 main.go:141] libmachine: STDERR: 
	I0807 11:10:57.674161   10664 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/bridge-921000/disk.qcow2 +20000M
	I0807 11:10:57.682485   10664 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:10:57.682499   10664 main.go:141] libmachine: STDERR: 
	I0807 11:10:57.682510   10664 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/bridge-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/bridge-921000/disk.qcow2
	I0807 11:10:57.682516   10664 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:10:57.682525   10664 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:10:57.682550   10664 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/bridge-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/bridge-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/bridge-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:49:51:8e:54:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/bridge-921000/disk.qcow2
	I0807 11:10:57.684284   10664 main.go:141] libmachine: STDOUT: 
	I0807 11:10:57.684298   10664 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:10:57.684318   10664 client.go:171] duration metric: took 310.766792ms to LocalClient.Create
	I0807 11:10:59.686469   10664 start.go:128] duration metric: took 2.333350291s to createHost
	I0807 11:10:59.686563   10664 start.go:83] releasing machines lock for "bridge-921000", held for 2.333529292s
	W0807 11:10:59.686609   10664 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:10:59.693426   10664 out.go:177] * Deleting "bridge-921000" in qemu2 ...
	W0807 11:10:59.722934   10664 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:10:59.722982   10664 start.go:729] Will try again in 5 seconds ...
	I0807 11:11:04.725061   10664 start.go:360] acquireMachinesLock for bridge-921000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:11:04.725283   10664 start.go:364] duration metric: took 188.958µs to acquireMachinesLock for "bridge-921000"
	I0807 11:11:04.725322   10664 start.go:93] Provisioning new machine with config: &{Name:bridge-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:11:04.725407   10664 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:11:04.734617   10664 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0807 11:11:04.752109   10664 start.go:159] libmachine.API.Create for "bridge-921000" (driver="qemu2")
	I0807 11:11:04.752134   10664 client.go:168] LocalClient.Create starting
	I0807 11:11:04.752191   10664 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:11:04.752227   10664 main.go:141] libmachine: Decoding PEM data...
	I0807 11:11:04.752237   10664 main.go:141] libmachine: Parsing certificate...
	I0807 11:11:04.752280   10664 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:11:04.752303   10664 main.go:141] libmachine: Decoding PEM data...
	I0807 11:11:04.752309   10664 main.go:141] libmachine: Parsing certificate...
	I0807 11:11:04.752829   10664 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:11:04.907504   10664 main.go:141] libmachine: Creating SSH key...
	I0807 11:11:04.949479   10664 main.go:141] libmachine: Creating Disk image...
	I0807 11:11:04.949485   10664 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:11:04.949710   10664 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/bridge-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/bridge-921000/disk.qcow2
	I0807 11:11:04.959398   10664 main.go:141] libmachine: STDOUT: 
	I0807 11:11:04.959418   10664 main.go:141] libmachine: STDERR: 
	I0807 11:11:04.959480   10664 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/bridge-921000/disk.qcow2 +20000M
	I0807 11:11:04.967695   10664 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:11:04.967710   10664 main.go:141] libmachine: STDERR: 
	I0807 11:11:04.967753   10664 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/bridge-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/bridge-921000/disk.qcow2
	I0807 11:11:04.967759   10664 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:11:04.967768   10664 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:11:04.967791   10664 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/bridge-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/bridge-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/bridge-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:eb:8d:58:0a:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/bridge-921000/disk.qcow2
	I0807 11:11:04.969555   10664 main.go:141] libmachine: STDOUT: 
	I0807 11:11:04.969570   10664 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:11:04.969582   10664 client.go:171] duration metric: took 217.445959ms to LocalClient.Create
	I0807 11:11:06.971664   10664 start.go:128] duration metric: took 2.246269334s to createHost
	I0807 11:11:06.971735   10664 start.go:83] releasing machines lock for "bridge-921000", held for 2.246459625s
	W0807 11:11:06.971955   10664 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-921000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-921000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:11:06.987472   10664 out.go:177] 
	W0807 11:11:06.990612   10664 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:11:06.990628   10664 out.go:239] * 
	* 
	W0807 11:11:06.992250   10664 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:11:06.999480   10664 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.78s)
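
Every failed start in this group reduces to the same environmental root cause: the qemu2 driver launches qemu-system-aarch64 through socket_vmnet_client, and that client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"). A minimal triage sketch for the test host, assuming socket_vmnet was installed through Homebrew as recommended for minikube's qemu2 driver (the launchd label and service name are assumptions for other install methods):

	# Does the daemon's unix socket exist on disk?
	ls -l /var/run/socket_vmnet
	# Is a socket_vmnet service loaded in launchd? (label pattern is an assumption)
	sudo launchctl list | grep -i socket_vmnet
	# Restart the daemon via Homebrew services, as in the minikube docs
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet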
TestNetworkPlugins/group/kubenet/Start (10.08s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-921000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-921000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (10.0749485s)

-- stdout --
	* [kubenet-921000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-921000" primary control-plane node in "kubenet-921000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-921000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 11:11:09.204425   10773 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:11:09.204548   10773 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:11:09.204551   10773 out.go:304] Setting ErrFile to fd 2...
	I0807 11:11:09.204554   10773 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:11:09.204701   10773 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:11:09.205801   10773 out.go:298] Setting JSON to false
	I0807 11:11:09.222077   10773 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6038,"bootTime":1723048231,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 11:11:09.222148   10773 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 11:11:09.226704   10773 out.go:177] * [kubenet-921000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 11:11:09.233735   10773 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 11:11:09.233764   10773 notify.go:220] Checking for updates...
	I0807 11:11:09.240692   10773 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:11:09.243708   10773 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 11:11:09.246684   10773 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 11:11:09.249707   10773 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 11:11:09.252744   10773 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 11:11:09.254518   10773 config.go:182] Loaded profile config "multinode-190000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 11:11:09.254582   10773 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0807 11:11:09.254634   10773 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 11:11:09.258695   10773 out.go:177] * Using the qemu2 driver based on user configuration
	I0807 11:11:09.265591   10773 start.go:297] selected driver: qemu2
	I0807 11:11:09.265597   10773 start.go:901] validating driver "qemu2" against <nil>
	I0807 11:11:09.265605   10773 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 11:11:09.267931   10773 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 11:11:09.270720   10773 out.go:177] * Automatically selected the socket_vmnet network
	I0807 11:11:09.273811   10773 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 11:11:09.273857   10773 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0807 11:11:09.273909   10773 start.go:340] cluster config:
	{Name:kubenet-921000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:11:09.277415   10773 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:11:09.284706   10773 out.go:177] * Starting "kubenet-921000" primary control-plane node in "kubenet-921000" cluster
	I0807 11:11:09.288706   10773 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 11:11:09.288723   10773 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 11:11:09.288735   10773 cache.go:56] Caching tarball of preloaded images
	I0807 11:11:09.288789   10773 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 11:11:09.288795   10773 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 11:11:09.288869   10773 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/kubenet-921000/config.json ...
	I0807 11:11:09.288880   10773 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/kubenet-921000/config.json: {Name:mk3335fb1a228fb2901474400ae1dd09327c3a49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:11:09.289092   10773 start.go:360] acquireMachinesLock for kubenet-921000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:11:09.289123   10773 start.go:364] duration metric: took 25.083µs to acquireMachinesLock for "kubenet-921000"
	I0807 11:11:09.289146   10773 start.go:93] Provisioning new machine with config: &{Name:kubenet-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:11:09.289175   10773 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:11:09.297766   10773 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0807 11:11:09.313558   10773 start.go:159] libmachine.API.Create for "kubenet-921000" (driver="qemu2")
	I0807 11:11:09.313585   10773 client.go:168] LocalClient.Create starting
	I0807 11:11:09.313652   10773 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:11:09.313683   10773 main.go:141] libmachine: Decoding PEM data...
	I0807 11:11:09.313693   10773 main.go:141] libmachine: Parsing certificate...
	I0807 11:11:09.313735   10773 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:11:09.313758   10773 main.go:141] libmachine: Decoding PEM data...
	I0807 11:11:09.313766   10773 main.go:141] libmachine: Parsing certificate...
	I0807 11:11:09.314087   10773 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:11:09.468479   10773 main.go:141] libmachine: Creating SSH key...
	I0807 11:11:09.713077   10773 main.go:141] libmachine: Creating Disk image...
	I0807 11:11:09.713092   10773 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:11:09.713393   10773 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubenet-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubenet-921000/disk.qcow2
	I0807 11:11:09.723246   10773 main.go:141] libmachine: STDOUT: 
	I0807 11:11:09.723275   10773 main.go:141] libmachine: STDERR: 
	I0807 11:11:09.723335   10773 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubenet-921000/disk.qcow2 +20000M
	I0807 11:11:09.731351   10773 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:11:09.731363   10773 main.go:141] libmachine: STDERR: 
	I0807 11:11:09.731381   10773 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubenet-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubenet-921000/disk.qcow2
	I0807 11:11:09.731387   10773 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:11:09.731401   10773 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:11:09.731425   10773 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubenet-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubenet-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubenet-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:2e:8b:c9:bd:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubenet-921000/disk.qcow2
	I0807 11:11:09.733084   10773 main.go:141] libmachine: STDOUT: 
	I0807 11:11:09.733097   10773 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:11:09.733115   10773 client.go:171] duration metric: took 419.532959ms to LocalClient.Create
	I0807 11:11:11.735181   10773 start.go:128] duration metric: took 2.446026833s to createHost
	I0807 11:11:11.735209   10773 start.go:83] releasing machines lock for "kubenet-921000", held for 2.44611675s
	W0807 11:11:11.735255   10773 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:11:11.750059   10773 out.go:177] * Deleting "kubenet-921000" in qemu2 ...
	W0807 11:11:11.766570   10773 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:11:11.766578   10773 start.go:729] Will try again in 5 seconds ...
	I0807 11:11:16.768776   10773 start.go:360] acquireMachinesLock for kubenet-921000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:11:16.769320   10773 start.go:364] duration metric: took 433.208µs to acquireMachinesLock for "kubenet-921000"
	I0807 11:11:16.769454   10773 start.go:93] Provisioning new machine with config: &{Name:kubenet-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:11:16.769701   10773 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:11:16.778321   10773 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0807 11:11:16.822632   10773 start.go:159] libmachine.API.Create for "kubenet-921000" (driver="qemu2")
	I0807 11:11:16.822715   10773 client.go:168] LocalClient.Create starting
	I0807 11:11:16.822906   10773 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:11:16.823001   10773 main.go:141] libmachine: Decoding PEM data...
	I0807 11:11:16.823021   10773 main.go:141] libmachine: Parsing certificate...
	I0807 11:11:16.823090   10773 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:11:16.823141   10773 main.go:141] libmachine: Decoding PEM data...
	I0807 11:11:16.823155   10773 main.go:141] libmachine: Parsing certificate...
	I0807 11:11:16.823672   10773 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:11:16.985936   10773 main.go:141] libmachine: Creating SSH key...
	I0807 11:11:17.183648   10773 main.go:141] libmachine: Creating Disk image...
	I0807 11:11:17.183663   10773 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:11:17.183881   10773 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubenet-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubenet-921000/disk.qcow2
	I0807 11:11:17.193411   10773 main.go:141] libmachine: STDOUT: 
	I0807 11:11:17.193434   10773 main.go:141] libmachine: STDERR: 
	I0807 11:11:17.193485   10773 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubenet-921000/disk.qcow2 +20000M
	I0807 11:11:17.201706   10773 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:11:17.201738   10773 main.go:141] libmachine: STDERR: 
	I0807 11:11:17.201754   10773 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubenet-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubenet-921000/disk.qcow2
	I0807 11:11:17.201760   10773 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:11:17.201767   10773 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:11:17.201811   10773 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubenet-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubenet-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubenet-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:c5:aa:4b:37:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/kubenet-921000/disk.qcow2
	I0807 11:11:17.203451   10773 main.go:141] libmachine: STDOUT: 
	I0807 11:11:17.203464   10773 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:11:17.203481   10773 client.go:171] duration metric: took 380.756583ms to LocalClient.Create
	I0807 11:11:19.205656   10773 start.go:128] duration metric: took 2.435923542s to createHost
	I0807 11:11:19.205835   10773 start.go:83] releasing machines lock for "kubenet-921000", held for 2.436502167s
	W0807 11:11:19.206059   10773 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-921000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-921000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:11:19.219852   10773 out.go:177] 
	W0807 11:11:19.223950   10773 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:11:19.223977   10773 out.go:239] * 
	* 
	W0807 11:11:19.225719   10773 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:11:19.236929   10773 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (10.08s)
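
The refusal can also be reproduced in isolation, without minikube, by invoking the same client binary these logs show but with a trivial payload command (a sketch: socket_vmnet_client connects to the socket and then runs the given command with the connection passed as fd 3, which is why the generated qemu command line uses -netdev socket,id=net0,fd=3):

	# Prints the same 'Failed to connect to "/var/run/socket_vmnet": Connection refused'
	# while the daemon is down; prints "ok" once it is reachable again.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok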

TestStartStop/group/old-k8s-version/serial/FirstStart (9.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-107000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-107000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.755401334s)

-- stdout --
	* [old-k8s-version-107000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-107000" primary control-plane node in "old-k8s-version-107000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-107000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 11:11:21.576991   10891 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:11:21.577146   10891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:11:21.577150   10891 out.go:304] Setting ErrFile to fd 2...
	I0807 11:11:21.577153   10891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:11:21.577304   10891 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:11:21.578687   10891 out.go:298] Setting JSON to false
	I0807 11:11:21.595708   10891 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6050,"bootTime":1723048231,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 11:11:21.595780   10891 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 11:11:21.602204   10891 out.go:177] * [old-k8s-version-107000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 11:11:21.609194   10891 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 11:11:21.609262   10891 notify.go:220] Checking for updates...
	I0807 11:11:21.616057   10891 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:11:21.619178   10891 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 11:11:21.622115   10891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 11:11:21.625147   10891 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 11:11:21.628175   10891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 11:11:21.629928   10891 config.go:182] Loaded profile config "multinode-190000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 11:11:21.630007   10891 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0807 11:11:21.630076   10891 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 11:11:21.634096   10891 out.go:177] * Using the qemu2 driver based on user configuration
	I0807 11:11:21.640967   10891 start.go:297] selected driver: qemu2
	I0807 11:11:21.640972   10891 start.go:901] validating driver "qemu2" against <nil>
	I0807 11:11:21.640978   10891 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 11:11:21.643094   10891 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 11:11:21.646135   10891 out.go:177] * Automatically selected the socket_vmnet network
	I0807 11:11:21.649183   10891 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 11:11:21.649220   10891 cni.go:84] Creating CNI manager for ""
	I0807 11:11:21.649226   10891 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0807 11:11:21.649257   10891 start.go:340] cluster config:
	{Name:old-k8s-version-107000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-107000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:11:21.652561   10891 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:11:21.656073   10891 out.go:177] * Starting "old-k8s-version-107000" primary control-plane node in "old-k8s-version-107000" cluster
	I0807 11:11:21.664149   10891 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0807 11:11:21.664167   10891 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0807 11:11:21.664175   10891 cache.go:56] Caching tarball of preloaded images
	I0807 11:11:21.664233   10891 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 11:11:21.664238   10891 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0807 11:11:21.664300   10891 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/old-k8s-version-107000/config.json ...
	I0807 11:11:21.664312   10891 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/old-k8s-version-107000/config.json: {Name:mk4717f2a34402cc3f91012c73e7e261e5dd71ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:11:21.664585   10891 start.go:360] acquireMachinesLock for old-k8s-version-107000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:11:21.664617   10891 start.go:364] duration metric: took 25.25µs to acquireMachinesLock for "old-k8s-version-107000"
	I0807 11:11:21.664626   10891 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-107000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-107000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:11:21.664650   10891 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:11:21.668160   10891 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 11:11:21.683350   10891 start.go:159] libmachine.API.Create for "old-k8s-version-107000" (driver="qemu2")
	I0807 11:11:21.683379   10891 client.go:168] LocalClient.Create starting
	I0807 11:11:21.683443   10891 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:11:21.683472   10891 main.go:141] libmachine: Decoding PEM data...
	I0807 11:11:21.683480   10891 main.go:141] libmachine: Parsing certificate...
	I0807 11:11:21.683523   10891 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:11:21.683546   10891 main.go:141] libmachine: Decoding PEM data...
	I0807 11:11:21.683553   10891 main.go:141] libmachine: Parsing certificate...
	I0807 11:11:21.683885   10891 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:11:21.839307   10891 main.go:141] libmachine: Creating SSH key...
	I0807 11:11:21.897695   10891 main.go:141] libmachine: Creating Disk image...
	I0807 11:11:21.897700   10891 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:11:21.897922   10891 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/disk.qcow2
	I0807 11:11:21.907504   10891 main.go:141] libmachine: STDOUT: 
	I0807 11:11:21.907527   10891 main.go:141] libmachine: STDERR: 
	I0807 11:11:21.907588   10891 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/disk.qcow2 +20000M
	I0807 11:11:21.915540   10891 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:11:21.915557   10891 main.go:141] libmachine: STDERR: 
	I0807 11:11:21.915582   10891 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/disk.qcow2
	I0807 11:11:21.915587   10891 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:11:21.915599   10891 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:11:21.915623   10891 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:7b:76:88:44:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/disk.qcow2
	I0807 11:11:21.917271   10891 main.go:141] libmachine: STDOUT: 
	I0807 11:11:21.917287   10891 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:11:21.917305   10891 client.go:171] duration metric: took 233.924958ms to LocalClient.Create
	I0807 11:11:23.919626   10891 start.go:128] duration metric: took 2.254977291s to createHost
	I0807 11:11:23.919696   10891 start.go:83] releasing machines lock for "old-k8s-version-107000", held for 2.255102834s
	W0807 11:11:23.919758   10891 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:11:23.931017   10891 out.go:177] * Deleting "old-k8s-version-107000" in qemu2 ...
	W0807 11:11:23.962185   10891 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:11:23.962221   10891 start.go:729] Will try again in 5 seconds ...
	I0807 11:11:28.964332   10891 start.go:360] acquireMachinesLock for old-k8s-version-107000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:11:28.964808   10891 start.go:364] duration metric: took 395.667µs to acquireMachinesLock for "old-k8s-version-107000"
	I0807 11:11:28.964967   10891 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-107000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-107000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:11:28.965157   10891 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:11:28.974984   10891 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 11:11:29.017147   10891 start.go:159] libmachine.API.Create for "old-k8s-version-107000" (driver="qemu2")
	I0807 11:11:29.017198   10891 client.go:168] LocalClient.Create starting
	I0807 11:11:29.017317   10891 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:11:29.017392   10891 main.go:141] libmachine: Decoding PEM data...
	I0807 11:11:29.017408   10891 main.go:141] libmachine: Parsing certificate...
	I0807 11:11:29.017474   10891 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:11:29.017518   10891 main.go:141] libmachine: Decoding PEM data...
	I0807 11:11:29.017527   10891 main.go:141] libmachine: Parsing certificate...
	I0807 11:11:29.018051   10891 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:11:29.179649   10891 main.go:141] libmachine: Creating SSH key...
	I0807 11:11:29.242430   10891 main.go:141] libmachine: Creating Disk image...
	I0807 11:11:29.242438   10891 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:11:29.242653   10891 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/disk.qcow2
	I0807 11:11:29.252539   10891 main.go:141] libmachine: STDOUT: 
	I0807 11:11:29.252561   10891 main.go:141] libmachine: STDERR: 
	I0807 11:11:29.252611   10891 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/disk.qcow2 +20000M
	I0807 11:11:29.260514   10891 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:11:29.260534   10891 main.go:141] libmachine: STDERR: 
	I0807 11:11:29.260543   10891 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/disk.qcow2
	I0807 11:11:29.260556   10891 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:11:29.260563   10891 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:11:29.260590   10891 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:c4:69:1b:03:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/disk.qcow2
	I0807 11:11:29.262322   10891 main.go:141] libmachine: STDOUT: 
	I0807 11:11:29.262336   10891 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:11:29.262348   10891 client.go:171] duration metric: took 245.148833ms to LocalClient.Create
	I0807 11:11:31.264520   10891 start.go:128] duration metric: took 2.299352875s to createHost
	I0807 11:11:31.264604   10891 start.go:83] releasing machines lock for "old-k8s-version-107000", held for 2.299784792s
	W0807 11:11:31.265022   10891 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-107000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-107000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:11:31.274606   10891 out.go:177] 
	W0807 11:11:31.280687   10891 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:11:31.280738   10891 out.go:239] * 
	* 
	W0807 11:11:31.283307   10891 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:11:31.290540   10891 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-107000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-107000 -n old-k8s-version-107000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-107000 -n old-k8s-version-107000: exit status 7 (62.928ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-107000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.82s)
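
Every start attempt in this group fails at the same point: minikube launches qemu-system-aarch64 through socket_vmnet_client, and that client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the VM never comes up and the create is rolled back. A quick way to separate a dead socket_vmnet daemon from a QEMU or minikube problem is to dial the socket directly. A minimal diagnostic sketch in Go (not part of the test suite; the socket path is taken from the failing command line above):

// probe_socket_vmnet.go: dial the unix socket that socket_vmnet_client
// needs and report whether anything is accepting connections on it.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the failing qemu command line
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here reproduces the failure in the log:
		// the socket file may exist, but no daemon is accepting on it.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening on", sock)
}

If the dial is refused while the socket file exists, the daemon behind it is down, which would explain every failure in this group without implicating QEMU itself.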

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-107000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-107000 create -f testdata/busybox.yaml: exit status 1 (30.415334ms)

** stderr ** 
	error: context "old-k8s-version-107000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-107000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-107000 -n old-k8s-version-107000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-107000 -n old-k8s-version-107000: exit status 7 (28.615958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-107000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-107000 -n old-k8s-version-107000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-107000 -n old-k8s-version-107000: exit status 7 (30.36125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-107000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
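
The kubectl failure is a knock-on effect of FirstStart: the cluster was never created, so no context named old-k8s-version-107000 was ever written to the kubeconfig. The lookup kubectl performs can be sketched with client-go's kubeconfig loading rules; the clientcmd calls below are real API, while the program itself is only illustrative:

// context_check.go: resolve the kubeconfig the way kubectl does and
// report whether a named context exists in it.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Honors $KUBECONFIG and falls back to ~/.kube/config, like kubectl.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := rules.Load()
	if err != nil {
		fmt.Fprintln(os.Stderr, "loading kubeconfig:", err)
		os.Exit(1)
	}
	const name = "old-k8s-version-107000"
	if _, ok := cfg.Contexts[name]; !ok {
		fmt.Fprintf(os.Stderr, "context %q does not exist\n", name) // matches the kubectl error above
		os.Exit(1)
	}
	fmt.Println("context", name, "is present")
}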

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-107000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-107000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-107000 describe deploy/metrics-server -n kube-system: exit status 1 (27.421875ms)

** stderr ** 
	error: context "old-k8s-version-107000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-107000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-107000 -n old-k8s-version-107000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-107000 -n old-k8s-version-107000: exit status 7 (28.164375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-107000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
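
The expected string " fake.domain/registry.k8s.io/echoserver:1.4" is the --registries override prefixed onto the --images override from the addons enable invocation above. A hypothetical helper reproducing that composition (the function name and the empty-registry fallback are assumptions, not minikube code; only the two input values and the expected output come from this log):

// Reproduces the image reference the assertion looks for: the custom
// registry from --registries prefixed onto the custom image from --images.
package main

import "fmt"

func overriddenImage(registry, image string) string {
	if registry == "" {
		return image // no override: keep the image reference as-is
	}
	return registry + "/" + image
}

func main() {
	// Values from the test invocation above.
	fmt.Println(overriddenImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
	// Output: fake.domain/registry.k8s.io/echoserver:1.4
}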

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-107000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-107000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.185853333s)

-- stdout --
	* [old-k8s-version-107000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-107000" primary control-plane node in "old-k8s-version-107000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-107000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-107000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 11:11:33.591535   10934 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:11:33.591677   10934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:11:33.591681   10934 out.go:304] Setting ErrFile to fd 2...
	I0807 11:11:33.591683   10934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:11:33.591831   10934 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:11:33.592879   10934 out.go:298] Setting JSON to false
	I0807 11:11:33.609469   10934 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6062,"bootTime":1723048231,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 11:11:33.609546   10934 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 11:11:33.614083   10934 out.go:177] * [old-k8s-version-107000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 11:11:33.620944   10934 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 11:11:33.620983   10934 notify.go:220] Checking for updates...
	I0807 11:11:33.627993   10934 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:11:33.631063   10934 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 11:11:33.633946   10934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 11:11:33.637040   10934 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 11:11:33.639990   10934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 11:11:33.643284   10934 config.go:182] Loaded profile config "old-k8s-version-107000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0807 11:11:33.646941   10934 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0807 11:11:33.650056   10934 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 11:11:33.654976   10934 out.go:177] * Using the qemu2 driver based on existing profile
	I0807 11:11:33.661913   10934 start.go:297] selected driver: qemu2
	I0807 11:11:33.661919   10934 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-107000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-107000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:11:33.661986   10934 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 11:11:33.664626   10934 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 11:11:33.664664   10934 cni.go:84] Creating CNI manager for ""
	I0807 11:11:33.664671   10934 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0807 11:11:33.664700   10934 start.go:340] cluster config:
	{Name:old-k8s-version-107000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-107000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:11:33.668372   10934 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:11:33.675917   10934 out.go:177] * Starting "old-k8s-version-107000" primary control-plane node in "old-k8s-version-107000" cluster
	I0807 11:11:33.679046   10934 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0807 11:11:33.679074   10934 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0807 11:11:33.679084   10934 cache.go:56] Caching tarball of preloaded images
	I0807 11:11:33.679163   10934 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 11:11:33.679170   10934 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0807 11:11:33.679228   10934 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/old-k8s-version-107000/config.json ...
	I0807 11:11:33.679632   10934 start.go:360] acquireMachinesLock for old-k8s-version-107000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:11:33.679662   10934 start.go:364] duration metric: took 22.25µs to acquireMachinesLock for "old-k8s-version-107000"
	I0807 11:11:33.679671   10934 start.go:96] Skipping create...Using existing machine configuration
	I0807 11:11:33.679678   10934 fix.go:54] fixHost starting: 
	I0807 11:11:33.679795   10934 fix.go:112] recreateIfNeeded on old-k8s-version-107000: state=Stopped err=<nil>
	W0807 11:11:33.679804   10934 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 11:11:33.683942   10934 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-107000" ...
	I0807 11:11:33.691945   10934 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:11:33.691978   10934 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:c4:69:1b:03:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/disk.qcow2
	I0807 11:11:33.693991   10934 main.go:141] libmachine: STDOUT: 
	I0807 11:11:33.694008   10934 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:11:33.694034   10934 fix.go:56] duration metric: took 14.357459ms for fixHost
	I0807 11:11:33.694038   10934 start.go:83] releasing machines lock for "old-k8s-version-107000", held for 14.372ms
	W0807 11:11:33.694044   10934 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:11:33.694073   10934 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:11:33.694078   10934 start.go:729] Will try again in 5 seconds ...
	I0807 11:11:38.695015   10934 start.go:360] acquireMachinesLock for old-k8s-version-107000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:11:38.695441   10934 start.go:364] duration metric: took 317.542µs to acquireMachinesLock for "old-k8s-version-107000"
	I0807 11:11:38.695516   10934 start.go:96] Skipping create...Using existing machine configuration
	I0807 11:11:38.695533   10934 fix.go:54] fixHost starting: 
	I0807 11:11:38.696049   10934 fix.go:112] recreateIfNeeded on old-k8s-version-107000: state=Stopped err=<nil>
	W0807 11:11:38.696066   10934 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 11:11:38.703557   10934 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-107000" ...
	I0807 11:11:38.707516   10934 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:11:38.707666   10934 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:c4:69:1b:03:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/old-k8s-version-107000/disk.qcow2
	I0807 11:11:38.715327   10934 main.go:141] libmachine: STDOUT: 
	I0807 11:11:38.715386   10934 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:11:38.715455   10934 fix.go:56] duration metric: took 19.924459ms for fixHost
	I0807 11:11:38.715470   10934 start.go:83] releasing machines lock for "old-k8s-version-107000", held for 20.006917ms
	W0807 11:11:38.715655   10934 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-107000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-107000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:11:38.722465   10934 out.go:177] 
	W0807 11:11:38.726628   10934 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:11:38.726659   10934 out.go:239] * 
	* 
	W0807 11:11:38.728005   10934 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:11:38.736530   10934 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-107000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-107000 -n old-k8s-version-107000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-107000 -n old-k8s-version-107000: exit status 7 (59.29875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-107000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
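
SecondStart also documents minikube's retry behavior around StartHost: the first fixHost attempt fails, the machines lock is released, one retry follows after a fixed 5-second pause ("Will try again in 5 seconds ..."), and only then does the run exit with GUEST_PROVISION. A minimal sketch of that two-attempt pattern, assuming the delay and attempt count are exactly what this log shows:

// Two-attempt retry sketch mirroring the pattern visible in the log:
// one retry after a fixed 5s pause, then the error is surfaced.
package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the real driver start; in this log it is the
// socket_vmnet connection that keeps being refused.
func startHost() error {
	return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := startHost()
	if err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		err = startHost()
	}
	if err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}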

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-107000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-107000 -n old-k8s-version-107000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-107000 -n old-k8s-version-107000: exit status 7 (30.756666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-107000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
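
Each post-mortem invokes minikube status --format={{.Host}}. The --format value is a Go text/template rendered against the status object, which is why the bare word "Stopped" is all that reaches stdout. A self-contained sketch of that rendering, with the status struct trimmed to the one field these logs exercise:

// Renders a {{.Host}} template against a status value, producing the
// bare "Stopped" seen in every post-mortem above.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host string // e.g. "Running" or "Stopped"
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped"}) // prints: Stopped
}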

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-107000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-107000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-107000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.616833ms)

** stderr ** 
	error: context "old-k8s-version-107000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-107000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-107000 -n old-k8s-version-107000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-107000 -n old-k8s-version-107000: exit status 7 (28.589792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-107000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-107000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-107000 -n old-k8s-version-107000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-107000 -n old-k8s-version-107000: exit status 7 (28.614792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-107000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
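
The "(-want +got)" block above follows go-cmp conventions: "-" lines are expected entries that are missing, and "+" lines would be unexpected extras. Because the host never ran, image list returned nothing and every expected v1.20.0 image is reported missing. A sketch reproducing that diff shape with github.com/google/go-cmp (the want list is abbreviated here):

// Compares an expected image list against an empty result the way the
// test's diff output suggests, using go-cmp.
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"k8s.gcr.io/coredns:1.7.0",
		"k8s.gcr.io/etcd:3.4.13-0",
		"k8s.gcr.io/pause:3.2",
		// ... remaining v1.20.0 images elided
	}
	var got []string // empty: the VM never started, so `image list` saw nothing
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
	}
}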

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-107000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-107000 --alsologtostderr -v=1: exit status 83 (41.187166ms)

-- stdout --
	* The control-plane node old-k8s-version-107000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-107000"

-- /stdout --
** stderr ** 
	I0807 11:11:38.994248   10957 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:11:38.995227   10957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:11:38.995230   10957 out.go:304] Setting ErrFile to fd 2...
	I0807 11:11:38.995233   10957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:11:38.995394   10957 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:11:38.995593   10957 out.go:298] Setting JSON to false
	I0807 11:11:38.995600   10957 mustload.go:65] Loading cluster: old-k8s-version-107000
	I0807 11:11:38.995794   10957 config.go:182] Loaded profile config "old-k8s-version-107000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0807 11:11:39.000629   10957 out.go:177] * The control-plane node old-k8s-version-107000 host is not running: state=Stopped
	I0807 11:11:39.003503   10957 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-107000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-107000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-107000 -n old-k8s-version-107000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-107000 -n old-k8s-version-107000: exit status 7 (28.663791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-107000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-107000 -n old-k8s-version-107000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-107000 -n old-k8s-version-107000: exit status 7 (29.932375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-107000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
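
Pause exits with status 83 almost immediately: mustload.go reads the profile's config.json, sees the host is Stopped, prints the advice text, and never attempts a pause. For reference, the per-profile config the log reports loading can be read as below; the struct is a trimmed illustration covering only the three fields the log actually prints, and the path assumes the default MINIKUBE_HOME rather than the Jenkins one used in this run:

// Reads a minikube profile's config.json and prints the fields that
// config.go logs ("Driver=..., ContainerRuntime=..., KubernetesVersion=...").
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

type profileConfig struct {
	Driver           string
	KubernetesConfig struct {
		KubernetesVersion string
		ContainerRuntime  string
	}
}

func main() {
	home, _ := os.UserHomeDir()
	path := filepath.Join(home, ".minikube", "profiles", "old-k8s-version-107000", "config.json")
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var cfg profileConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("Driver=%s, ContainerRuntime=%s, KubernetesVersion=%s\n",
		cfg.Driver, cfg.KubernetesConfig.ContainerRuntime, cfg.KubernetesConfig.KubernetesVersion)
}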

TestStartStop/group/no-preload/serial/FirstStart (9.88s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-641000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-641000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (9.834481583s)

-- stdout --
	* [no-preload-641000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-641000" primary control-plane node in "no-preload-641000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-641000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 11:11:39.310239   10974 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:11:39.310372   10974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:11:39.310375   10974 out.go:304] Setting ErrFile to fd 2...
	I0807 11:11:39.310377   10974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:11:39.310502   10974 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:11:39.311584   10974 out.go:298] Setting JSON to false
	I0807 11:11:39.327923   10974 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6068,"bootTime":1723048231,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 11:11:39.328008   10974 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 11:11:39.332637   10974 out.go:177] * [no-preload-641000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 11:11:39.339608   10974 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 11:11:39.339635   10974 notify.go:220] Checking for updates...
	I0807 11:11:39.346548   10974 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:11:39.349605   10974 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 11:11:39.352559   10974 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 11:11:39.355600   10974 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 11:11:39.358560   10974 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 11:11:39.361823   10974 config.go:182] Loaded profile config "multinode-190000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 11:11:39.361881   10974 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0807 11:11:39.361941   10974 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 11:11:39.365560   10974 out.go:177] * Using the qemu2 driver based on user configuration
	I0807 11:11:39.372590   10974 start.go:297] selected driver: qemu2
	I0807 11:11:39.372597   10974 start.go:901] validating driver "qemu2" against <nil>
	I0807 11:11:39.372604   10974 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 11:11:39.374964   10974 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 11:11:39.378508   10974 out.go:177] * Automatically selected the socket_vmnet network
	I0807 11:11:39.383807   10974 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 11:11:39.383827   10974 cni.go:84] Creating CNI manager for ""
	I0807 11:11:39.383836   10974 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 11:11:39.383840   10974 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 11:11:39.383880   10974 start.go:340] cluster config:
	{Name:no-preload-641000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-641000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:11:39.387495   10974 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:11:39.395614   10974 out.go:177] * Starting "no-preload-641000" primary control-plane node in "no-preload-641000" cluster
	I0807 11:11:39.399591   10974 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0807 11:11:39.399685   10974 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/no-preload-641000/config.json ...
	I0807 11:11:39.399702   10974 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/no-preload-641000/config.json: {Name:mk574ecf3231ccdcb66b4e10830ee2f242aeb8e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:11:39.399699   10974 cache.go:107] acquiring lock: {Name:mk5e2b6546238d7c0154921386382b701b23a45a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:11:39.399707   10974 cache.go:107] acquiring lock: {Name:mk4b2640aa201e24a264458b45308f582044f18e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:11:39.399737   10974 cache.go:107] acquiring lock: {Name:mk6a6038128ee9746bfb635f75171628fa8461d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:11:39.399740   10974 cache.go:107] acquiring lock: {Name:mkaacf585d2d3167ba4108499d686cf8eec9524a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:11:39.399757   10974 cache.go:107] acquiring lock: {Name:mk9a9d919c6d46370bf5429cc34776afe2ff4b1d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:11:39.399783   10974 cache.go:115] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0807 11:11:39.399788   10974 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 91.875µs
	I0807 11:11:39.399796   10974 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0807 11:11:39.399881   10974 cache.go:107] acquiring lock: {Name:mkfa114da60d1b0879f80afb60ab32a6c36adf9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:11:39.399901   10974 cache.go:107] acquiring lock: {Name:mk82d7fd6abc503641e8d7d5f6c07f8a09541f7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:11:39.399909   10974 cache.go:107] acquiring lock: {Name:mka569ae296b016ecfc36e0c6198cf36dab462e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:11:39.399977   10974 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0807 11:11:39.400051   10974 start.go:360] acquireMachinesLock for no-preload-641000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:11:39.400061   10974 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0807 11:11:39.400115   10974 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0807 11:11:39.400118   10974 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0807 11:11:39.400136   10974 start.go:364] duration metric: took 80.084µs to acquireMachinesLock for "no-preload-641000"
	I0807 11:11:39.400142   10974 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0807 11:11:39.400150   10974 start.go:93] Provisioning new machine with config: &{Name:no-preload-641000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-641000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:11:39.400185   10974 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:11:39.400124   10974 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0807 11:11:39.400310   10974 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0807 11:11:39.404580   10974 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 11:11:39.411664   10974 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0807 11:11:39.412496   10974 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0807 11:11:39.412523   10974 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0807 11:11:39.412589   10974 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0807 11:11:39.413803   10974 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0807 11:11:39.413852   10974 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0807 11:11:39.413888   10974 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0807 11:11:39.421589   10974 start.go:159] libmachine.API.Create for "no-preload-641000" (driver="qemu2")
	I0807 11:11:39.421610   10974 client.go:168] LocalClient.Create starting
	I0807 11:11:39.421685   10974 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:11:39.421718   10974 main.go:141] libmachine: Decoding PEM data...
	I0807 11:11:39.421729   10974 main.go:141] libmachine: Parsing certificate...
	I0807 11:11:39.421769   10974 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:11:39.421792   10974 main.go:141] libmachine: Decoding PEM data...
	I0807 11:11:39.421801   10974 main.go:141] libmachine: Parsing certificate...
	I0807 11:11:39.422207   10974 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:11:39.581538   10974 main.go:141] libmachine: Creating SSH key...
	I0807 11:11:39.690855   10974 main.go:141] libmachine: Creating Disk image...
	I0807 11:11:39.690871   10974 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:11:39.691107   10974 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/disk.qcow2
	I0807 11:11:39.700730   10974 main.go:141] libmachine: STDOUT: 
	I0807 11:11:39.700746   10974 main.go:141] libmachine: STDERR: 
	I0807 11:11:39.700789   10974 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/disk.qcow2 +20000M
	I0807 11:11:39.708883   10974 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:11:39.708898   10974 main.go:141] libmachine: STDERR: 
	I0807 11:11:39.708910   10974 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/disk.qcow2
	I0807 11:11:39.708914   10974 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:11:39.708925   10974 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:11:39.708953   10974 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:2e:9c:ab:d3:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/disk.qcow2
	I0807 11:11:39.710666   10974 main.go:141] libmachine: STDOUT: 
	I0807 11:11:39.710682   10974 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:11:39.710698   10974 client.go:171] duration metric: took 289.089ms to LocalClient.Create
	I0807 11:11:39.774984   10974 cache.go:162] opening:  /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0807 11:11:39.778237   10974 cache.go:162] opening:  /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0807 11:11:39.803442   10974 cache.go:162] opening:  /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0807 11:11:39.846404   10974 cache.go:162] opening:  /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0807 11:11:39.883664   10974 cache.go:162] opening:  /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0807 11:11:39.884280   10974 cache.go:162] opening:  /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0807 11:11:39.935040   10974 cache.go:162] opening:  /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0807 11:11:39.984892   10974 cache.go:157] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0807 11:11:39.984912   10974 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 585.179792ms
	I0807 11:11:39.984926   10974 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0807 11:11:41.710899   10974 start.go:128] duration metric: took 2.310719417s to createHost
	I0807 11:11:41.710947   10974 start.go:83] releasing machines lock for "no-preload-641000", held for 2.310836041s
	W0807 11:11:41.711022   10974 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:11:41.720508   10974 out.go:177] * Deleting "no-preload-641000" in qemu2 ...
	W0807 11:11:41.742965   10974 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:11:41.742985   10974 start.go:729] Will try again in 5 seconds ...
	I0807 11:11:42.174829   10974 cache.go:157] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 exists
	I0807 11:11:42.174879   10974 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0" took 2.775155208s
	I0807 11:11:42.174940   10974 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 succeeded
	I0807 11:11:43.177843   10974 cache.go:157] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0807 11:11:43.177886   10974 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.778076167s
	I0807 11:11:43.177904   10974 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0807 11:11:43.940182   10974 cache.go:157] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 exists
	I0807 11:11:43.940220   10974 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0" took 4.540585458s
	I0807 11:11:43.940234   10974 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 succeeded
	I0807 11:11:44.371571   10974 cache.go:157] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 exists
	I0807 11:11:44.371612   10974 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0" took 4.971800917s
	I0807 11:11:44.371632   10974 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 succeeded
	I0807 11:11:44.860872   10974 cache.go:157] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 exists
	I0807 11:11:44.860903   10974 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0" took 5.461276417s
	I0807 11:11:44.860915   10974 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 succeeded
	I0807 11:11:46.743639   10974 start.go:360] acquireMachinesLock for no-preload-641000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:11:46.743865   10974 start.go:364] duration metric: took 189.125µs to acquireMachinesLock for "no-preload-641000"
	I0807 11:11:46.743921   10974 start.go:93] Provisioning new machine with config: &{Name:no-preload-641000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-641000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:11:46.743994   10974 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:11:46.754298   10974 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 11:11:46.781218   10974 start.go:159] libmachine.API.Create for "no-preload-641000" (driver="qemu2")
	I0807 11:11:46.781265   10974 client.go:168] LocalClient.Create starting
	I0807 11:11:46.781361   10974 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:11:46.781412   10974 main.go:141] libmachine: Decoding PEM data...
	I0807 11:11:46.781426   10974 main.go:141] libmachine: Parsing certificate...
	I0807 11:11:46.781476   10974 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:11:46.781519   10974 main.go:141] libmachine: Decoding PEM data...
	I0807 11:11:46.781530   10974 main.go:141] libmachine: Parsing certificate...
	I0807 11:11:46.781911   10974 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:11:46.939302   10974 main.go:141] libmachine: Creating SSH key...
	I0807 11:11:47.057865   10974 main.go:141] libmachine: Creating Disk image...
	I0807 11:11:47.057872   10974 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:11:47.058102   10974 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/disk.qcow2
	I0807 11:11:47.067636   10974 main.go:141] libmachine: STDOUT: 
	I0807 11:11:47.067652   10974 main.go:141] libmachine: STDERR: 
	I0807 11:11:47.067698   10974 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/disk.qcow2 +20000M
	I0807 11:11:47.075632   10974 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:11:47.075647   10974 main.go:141] libmachine: STDERR: 
	I0807 11:11:47.075657   10974 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/disk.qcow2
	I0807 11:11:47.075662   10974 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:11:47.075671   10974 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:11:47.075712   10974 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:99:44:c2:77:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/disk.qcow2
	I0807 11:11:47.077429   10974 main.go:141] libmachine: STDOUT: 
	I0807 11:11:47.077441   10974 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:11:47.077453   10974 client.go:171] duration metric: took 296.189209ms to LocalClient.Create
	I0807 11:11:48.102120   10974 cache.go:157] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0807 11:11:48.102161   10974 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.702401708s
	I0807 11:11:48.102175   10974 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0807 11:11:48.102199   10974 cache.go:87] Successfully saved all images to host disk.
	I0807 11:11:49.079706   10974 start.go:128] duration metric: took 2.335676625s to createHost
	I0807 11:11:49.079802   10974 start.go:83] releasing machines lock for "no-preload-641000", held for 2.335952167s
	W0807 11:11:49.080127   10974 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-641000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-641000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:11:49.092857   10974 out.go:177] 
	W0807 11:11:49.095952   10974 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:11:49.095985   10974 out.go:239] * 
	* 
	W0807 11:11:49.097598   10974 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:11:49.107637   10974 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-641000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000: exit status 7 (41.872833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-641000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.88s)
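Every start attempt in this test dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its vmnet file descriptor and host creation fails before the guest can boot. A minimal sketch of how one might confirm the daemon's state on the build host, assuming socket_vmnet was installed via Homebrew as the SocketVMnetClientPath/SocketVMnetPath values in the config dump suggest:

    # Is the unix socket present, and is a socket_vmnet process holding it?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # If not running, the Homebrew service is the usual way to start it
    # (root is required because vmnet needs elevated privileges):
    HOMEBREW=$(which brew) && sudo ${HOMEBREW} services start socket_vmnet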

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-641000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-641000 create -f testdata/busybox.yaml: exit status 1 (27.922708ms)

** stderr ** 
	error: context "no-preload-641000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-641000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000: exit status 7 (28.618375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-641000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000: exit status 7 (30.19925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-641000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
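This failure is purely downstream of FirstStart: the VM was never provisioned, so minikube never wrote a "no-preload-641000" context into the kubeconfig, and every kubectl --context invocation exits 1 before contacting any cluster. A quick hedged check against the kubeconfig this run uses:

    # List contexts from the integration kubeconfig; the profile's context
    # should appear here only after a successful start.
    KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig kubectl config get-contexts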

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-641000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-641000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-641000 describe deploy/metrics-server -n kube-system: exit status 1 (27.047667ms)

** stderr ** 
	error: context "no-preload-641000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-641000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000: exit status 7 (29.119959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-641000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
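Note that the addons-enable step itself passed: with the host stopped it only records the metrics-server image and registry overrides in the profile config (they reappear in the SecondStart config dump below as CustomAddonImages/CustomAddonRegistries), while the follow-up kubectl describe fails for the same missing-context reason as above. A hedged way to inspect what was recorded, assuming addons list can read a stopped profile:

    # Read the saved addon state for the profile (assumption: no running
    # cluster is needed for this to print the enabled/disabled table).
    out/minikube-darwin-arm64 addons list -p no-preload-641000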

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-641000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-641000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (5.189305958s)

-- stdout --
	* [no-preload-641000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-641000" primary control-plane node in "no-preload-641000" cluster
	* Restarting existing qemu2 VM for "no-preload-641000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-641000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 11:11:52.998056   11052 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:11:52.998174   11052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:11:52.998177   11052 out.go:304] Setting ErrFile to fd 2...
	I0807 11:11:52.998179   11052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:11:52.998305   11052 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:11:52.999338   11052 out.go:298] Setting JSON to false
	I0807 11:11:53.015652   11052 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6082,"bootTime":1723048231,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 11:11:53.015725   11052 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 11:11:53.020042   11052 out.go:177] * [no-preload-641000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 11:11:53.027193   11052 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 11:11:53.027291   11052 notify.go:220] Checking for updates...
	I0807 11:11:53.033247   11052 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:11:53.036243   11052 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 11:11:53.037393   11052 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 11:11:53.040211   11052 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 11:11:53.043261   11052 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 11:11:53.046447   11052 config.go:182] Loaded profile config "no-preload-641000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0807 11:11:53.046703   11052 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 11:11:53.051200   11052 out.go:177] * Using the qemu2 driver based on existing profile
	I0807 11:11:53.058268   11052 start.go:297] selected driver: qemu2
	I0807 11:11:53.058278   11052 start.go:901] validating driver "qemu2" against &{Name:no-preload-641000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-641000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:11:53.058347   11052 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 11:11:53.060674   11052 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 11:11:53.060717   11052 cni.go:84] Creating CNI manager for ""
	I0807 11:11:53.060724   11052 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 11:11:53.060766   11052 start.go:340] cluster config:
	{Name:no-preload-641000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-641000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:11:53.064300   11052 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:11:53.071202   11052 out.go:177] * Starting "no-preload-641000" primary control-plane node in "no-preload-641000" cluster
	I0807 11:11:53.075194   11052 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0807 11:11:53.075269   11052 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/no-preload-641000/config.json ...
	I0807 11:11:53.075426   11052 cache.go:107] acquiring lock: {Name:mk5e2b6546238d7c0154921386382b701b23a45a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:11:53.075437   11052 cache.go:107] acquiring lock: {Name:mka569ae296b016ecfc36e0c6198cf36dab462e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:11:53.075454   11052 cache.go:107] acquiring lock: {Name:mk4b2640aa201e24a264458b45308f582044f18e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:11:53.075502   11052 cache.go:107] acquiring lock: {Name:mkfa114da60d1b0879f80afb60ab32a6c36adf9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:11:53.075496   11052 cache.go:107] acquiring lock: {Name:mk82d7fd6abc503641e8d7d5f6c07f8a09541f7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:11:53.075520   11052 cache.go:107] acquiring lock: {Name:mkaacf585d2d3167ba4108499d686cf8eec9524a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:11:53.075544   11052 cache.go:115] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0807 11:11:53.075552   11052 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 53.416µs
	I0807 11:11:53.075553   11052 cache.go:107] acquiring lock: {Name:mk9a9d919c6d46370bf5429cc34776afe2ff4b1d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:11:53.075558   11052 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0807 11:11:53.075488   11052 cache.go:115] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0807 11:11:53.075564   11052 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 143.208µs
	I0807 11:11:53.075566   11052 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0807 11:11:53.075572   11052 cache.go:107] acquiring lock: {Name:mk6a6038128ee9746bfb635f75171628fa8461d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:11:53.075490   11052 cache.go:115] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0807 11:11:53.075581   11052 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 148.375µs
	I0807 11:11:53.075586   11052 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0807 11:11:53.075645   11052 cache.go:115] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 exists
	I0807 11:11:53.075652   11052 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0" took 80.458µs
	I0807 11:11:53.075656   11052 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 succeeded
	I0807 11:11:53.075675   11052 cache.go:115] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 exists
	I0807 11:11:53.075678   11052 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0" took 150.959µs
	I0807 11:11:53.075681   11052 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 succeeded
	I0807 11:11:53.075710   11052 cache.go:115] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 exists
	I0807 11:11:53.075716   11052 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0" took 288.5µs
	I0807 11:11:53.075720   11052 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 succeeded
	I0807 11:11:53.075759   11052 start.go:360] acquireMachinesLock for no-preload-641000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:11:53.075755   11052 cache.go:115] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 exists
	I0807 11:11:53.075767   11052 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0" took 317.958µs
	I0807 11:11:53.075771   11052 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 succeeded
	I0807 11:11:53.075785   11052 cache.go:115] /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0807 11:11:53.075790   11052 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 276.458µs
	I0807 11:11:53.075797   11052 start.go:364] duration metric: took 32.125µs to acquireMachinesLock for "no-preload-641000"
	I0807 11:11:53.075806   11052 start.go:96] Skipping create...Using existing machine configuration
	I0807 11:11:53.075811   11052 fix.go:54] fixHost starting: 
	I0807 11:11:53.075798   11052 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0807 11:11:53.075834   11052 cache.go:87] Successfully saved all images to host disk.
	I0807 11:11:53.075918   11052 fix.go:112] recreateIfNeeded on no-preload-641000: state=Stopped err=<nil>
	W0807 11:11:53.075926   11052 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 11:11:53.086191   11052 out.go:177] * Restarting existing qemu2 VM for "no-preload-641000" ...
	I0807 11:11:53.090290   11052 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:11:53.090332   11052 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:99:44:c2:77:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/disk.qcow2
	I0807 11:11:53.092370   11052 main.go:141] libmachine: STDOUT: 
	I0807 11:11:53.092392   11052 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:11:53.092416   11052 fix.go:56] duration metric: took 16.605375ms for fixHost
	I0807 11:11:53.092422   11052 start.go:83] releasing machines lock for "no-preload-641000", held for 16.62125ms
	W0807 11:11:53.092431   11052 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:11:53.092467   11052 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:11:53.092473   11052 start.go:729] Will try again in 5 seconds ...
	I0807 11:11:58.094675   11052 start.go:360] acquireMachinesLock for no-preload-641000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:11:58.095103   11052 start.go:364] duration metric: took 346.958µs to acquireMachinesLock for "no-preload-641000"
	I0807 11:11:58.095220   11052 start.go:96] Skipping create...Using existing machine configuration
	I0807 11:11:58.095247   11052 fix.go:54] fixHost starting: 
	I0807 11:11:58.096042   11052 fix.go:112] recreateIfNeeded on no-preload-641000: state=Stopped err=<nil>
	W0807 11:11:58.096068   11052 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 11:11:58.111596   11052 out.go:177] * Restarting existing qemu2 VM for "no-preload-641000" ...
	I0807 11:11:58.114613   11052 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:11:58.114804   11052 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:99:44:c2:77:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/no-preload-641000/disk.qcow2
	I0807 11:11:58.124353   11052 main.go:141] libmachine: STDOUT: 
	I0807 11:11:58.124438   11052 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:11:58.124545   11052 fix.go:56] duration metric: took 29.30375ms for fixHost
	I0807 11:11:58.124565   11052 start.go:83] releasing machines lock for "no-preload-641000", held for 29.438917ms
	W0807 11:11:58.124777   11052 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-641000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-641000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:11:58.133475   11052 out.go:177] 
	W0807 11:11:58.136661   11052 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:11:58.136686   11052 out.go:239] * 
	* 
	W0807 11:11:58.139064   11052 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:11:58.147439   11052 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-641000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000: exit status 7 (65.486ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-641000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
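Unlike FirstStart, this run takes the fix path: the existing (Stopped) machine is reused ("Skipping create...Using existing machine configuration") and both restart attempts fail on the same unreachable socket_vmnet socket. The error text's own remedy is a profile delete; a hedged retry sequence, to be run only once the socket_vmnet daemon is actually reachable:

    # Tear down the half-created profile, then start fresh with the test's flags.
    out/minikube-darwin-arm64 delete -p no-preload-641000
    out/minikube-darwin-arm64 start -p no-preload-641000 --memory=2200 --preload=false \
      --driver=qemu2 --kubernetes-version=v1.31.0-rc.0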

TestStartStop/group/embed-certs/serial/FirstStart (10.47s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-332000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-332000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (10.402761042s)

-- stdout --
	* [embed-certs-332000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-332000" primary control-plane node in "embed-certs-332000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-332000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 11:11:53.233324   11062 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:11:53.233459   11062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:11:53.233462   11062 out.go:304] Setting ErrFile to fd 2...
	I0807 11:11:53.233465   11062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:11:53.233592   11062 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:11:53.234690   11062 out.go:298] Setting JSON to false
	I0807 11:11:53.250703   11062 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6082,"bootTime":1723048231,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 11:11:53.250771   11062 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 11:11:53.255233   11062 out.go:177] * [embed-certs-332000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 11:11:53.262260   11062 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 11:11:53.262326   11062 notify.go:220] Checking for updates...
	I0807 11:11:53.268312   11062 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:11:53.271230   11062 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 11:11:53.274256   11062 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 11:11:53.277199   11062 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 11:11:53.280265   11062 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 11:11:53.283487   11062 config.go:182] Loaded profile config "multinode-190000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 11:11:53.283564   11062 config.go:182] Loaded profile config "no-preload-641000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0807 11:11:53.283606   11062 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 11:11:53.288254   11062 out.go:177] * Using the qemu2 driver based on user configuration
	I0807 11:11:53.303242   11062 start.go:297] selected driver: qemu2
	I0807 11:11:53.303255   11062 start.go:901] validating driver "qemu2" against <nil>
	I0807 11:11:53.303270   11062 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 11:11:53.305604   11062 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 11:11:53.308331   11062 out.go:177] * Automatically selected the socket_vmnet network
	I0807 11:11:53.311369   11062 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 11:11:53.311391   11062 cni.go:84] Creating CNI manager for ""
	I0807 11:11:53.311404   11062 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 11:11:53.311413   11062 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 11:11:53.311438   11062 start.go:340] cluster config:
	{Name:embed-certs-332000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:11:53.315045   11062 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:11:53.322226   11062 out.go:177] * Starting "embed-certs-332000" primary control-plane node in "embed-certs-332000" cluster
	I0807 11:11:53.326261   11062 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 11:11:53.326279   11062 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 11:11:53.326288   11062 cache.go:56] Caching tarball of preloaded images
	I0807 11:11:53.326385   11062 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 11:11:53.326415   11062 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 11:11:53.326488   11062 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/embed-certs-332000/config.json ...
	I0807 11:11:53.326503   11062 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/embed-certs-332000/config.json: {Name:mkfc0b119e3224a1f4b3d517f6e37d41a4a13a0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:11:53.326958   11062 start.go:360] acquireMachinesLock for embed-certs-332000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:11:53.326997   11062 start.go:364] duration metric: took 32.458µs to acquireMachinesLock for "embed-certs-332000"
	I0807 11:11:53.327009   11062 start.go:93] Provisioning new machine with config: &{Name:embed-certs-332000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:11:53.327057   11062 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:11:53.336230   11062 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 11:11:53.354846   11062 start.go:159] libmachine.API.Create for "embed-certs-332000" (driver="qemu2")
	I0807 11:11:53.354874   11062 client.go:168] LocalClient.Create starting
	I0807 11:11:53.354936   11062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:11:53.354970   11062 main.go:141] libmachine: Decoding PEM data...
	I0807 11:11:53.354980   11062 main.go:141] libmachine: Parsing certificate...
	I0807 11:11:53.355015   11062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:11:53.355038   11062 main.go:141] libmachine: Decoding PEM data...
	I0807 11:11:53.355049   11062 main.go:141] libmachine: Parsing certificate...
	I0807 11:11:53.355515   11062 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:11:53.511150   11062 main.go:141] libmachine: Creating SSH key...
	I0807 11:11:53.588524   11062 main.go:141] libmachine: Creating Disk image...
	I0807 11:11:53.588529   11062 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:11:53.588730   11062 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/disk.qcow2
	I0807 11:11:53.597684   11062 main.go:141] libmachine: STDOUT: 
	I0807 11:11:53.597704   11062 main.go:141] libmachine: STDERR: 
	I0807 11:11:53.597745   11062 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/disk.qcow2 +20000M
	I0807 11:11:53.605499   11062 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:11:53.605513   11062 main.go:141] libmachine: STDERR: 
	I0807 11:11:53.605528   11062 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/disk.qcow2
	I0807 11:11:53.605532   11062 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:11:53.605543   11062 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:11:53.605575   11062 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:25:7c:97:96:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/disk.qcow2
	I0807 11:11:53.607183   11062 main.go:141] libmachine: STDOUT: 
	I0807 11:11:53.607197   11062 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:11:53.607219   11062 client.go:171] duration metric: took 252.341125ms to LocalClient.Create
	I0807 11:11:55.609444   11062 start.go:128] duration metric: took 2.282364334s to createHost
	I0807 11:11:55.609519   11062 start.go:83] releasing machines lock for "embed-certs-332000", held for 2.282544042s
	W0807 11:11:55.609568   11062 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:11:55.625745   11062 out.go:177] * Deleting "embed-certs-332000" in qemu2 ...
	W0807 11:11:55.653068   11062 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:11:55.653098   11062 start.go:729] Will try again in 5 seconds ...
	I0807 11:12:00.655199   11062 start.go:360] acquireMachinesLock for embed-certs-332000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:12:01.235600   11062 start.go:364] duration metric: took 580.2975ms to acquireMachinesLock for "embed-certs-332000"
	I0807 11:12:01.235764   11062 start.go:93] Provisioning new machine with config: &{Name:embed-certs-332000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:12:01.236029   11062 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:12:01.251641   11062 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 11:12:01.300621   11062 start.go:159] libmachine.API.Create for "embed-certs-332000" (driver="qemu2")
	I0807 11:12:01.300667   11062 client.go:168] LocalClient.Create starting
	I0807 11:12:01.300794   11062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:12:01.300852   11062 main.go:141] libmachine: Decoding PEM data...
	I0807 11:12:01.300866   11062 main.go:141] libmachine: Parsing certificate...
	I0807 11:12:01.300926   11062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:12:01.300969   11062 main.go:141] libmachine: Decoding PEM data...
	I0807 11:12:01.300980   11062 main.go:141] libmachine: Parsing certificate...
	I0807 11:12:01.301576   11062 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:12:01.466215   11062 main.go:141] libmachine: Creating SSH key...
	I0807 11:12:01.537405   11062 main.go:141] libmachine: Creating Disk image...
	I0807 11:12:01.537410   11062 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:12:01.537616   11062 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/disk.qcow2
	I0807 11:12:01.547040   11062 main.go:141] libmachine: STDOUT: 
	I0807 11:12:01.547055   11062 main.go:141] libmachine: STDERR: 
	I0807 11:12:01.547120   11062 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/disk.qcow2 +20000M
	I0807 11:12:01.554888   11062 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:12:01.554902   11062 main.go:141] libmachine: STDERR: 
	I0807 11:12:01.554913   11062 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/disk.qcow2
	I0807 11:12:01.554916   11062 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:12:01.554927   11062 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:12:01.554961   11062 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:28:2e:e0:70:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/disk.qcow2
	I0807 11:12:01.556548   11062 main.go:141] libmachine: STDOUT: 
	I0807 11:12:01.556562   11062 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:12:01.556583   11062 client.go:171] duration metric: took 255.913584ms to LocalClient.Create
	I0807 11:12:03.558756   11062 start.go:128] duration metric: took 2.32273225s to createHost
	I0807 11:12:03.558797   11062 start.go:83] releasing machines lock for "embed-certs-332000", held for 2.323203083s
	W0807 11:12:03.559127   11062 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-332000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-332000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:12:03.577638   11062 out.go:177] 
	W0807 11:12:03.581400   11062 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:12:03.581423   11062 out.go:239] * 
	* 
	W0807 11:12:03.583918   11062 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:12:03.593515   11062 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-332000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-332000 -n embed-certs-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-332000 -n embed-certs-332000: exit status 7 (63.684125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.47s)
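
Every start in this run dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and the host lands in state Stopped. A minimal triage sketch for the affected agent, assuming socket_vmnet was installed via Homebrew and runs as a root service (the paths come from the log above; the Homebrew service name is an assumption):

	# Does the daemon's unix socket exist? (SocketVMnetPath from the cluster config)
	ls -l /var/run/socket_vmnet
	# Is the daemon process alive?
	pgrep -fl socket_vmnet
	# Restart the service (assumes the Homebrew-managed root service)
	sudo brew services restart socket_vmnet
	# Re-check with the same client binary the qemu2 driver uses, wrapping a no-op
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the restart brings the socket back, rerunning this test should get past VM creation; every other socket_vmnet "Connection refused" failure below shares this root cause.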

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-641000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000: exit status 7 (31.664958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-641000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
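
The context "no-preload-641000" does not exist errors in this and the following subtests are fallout from the profile's earlier start failure: the VM never booted, so minikube never wrote a no-preload-641000 entry into the kubeconfig, and every kubectl --context invocation fails before reaching a cluster. A quick confirmation with stock kubectl, using the KUBECONFIG path from this run's environment:

	KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig kubectl config get-contexts

If no-preload-641000 is absent from that list, none of the dependent subtests below can pass regardless of their own assertions.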

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-641000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-641000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-641000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.026542ms)

** stderr ** 
	error: context "no-preload-641000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-641000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000: exit status 7 (28.651584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-641000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-641000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-rc.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-rc.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000: exit status 7 (28.053959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-641000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
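
The block above is a go-cmp want/got diff: every expected image carries a leading "-" because the "got" side was empty; image list has nothing to report for a host that never booted. On a healthy profile the same data can be inspected by hand; the jq filter below is illustrative only, and the repoTags field name is an assumption about the JSON shape emitted by this minikube version:

	out/minikube-darwin-arm64 -p no-preload-641000 image list --format=json | jq -r '.[].repoTags[]'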

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-641000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-641000 --alsologtostderr -v=1: exit status 83 (39.553417ms)

-- stdout --
	* The control-plane node no-preload-641000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-641000"

-- /stdout --
** stderr ** 
	I0807 11:11:58.409870   11084 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:11:58.410034   11084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:11:58.410037   11084 out.go:304] Setting ErrFile to fd 2...
	I0807 11:11:58.410040   11084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:11:58.410190   11084 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:11:58.410410   11084 out.go:298] Setting JSON to false
	I0807 11:11:58.410417   11084 mustload.go:65] Loading cluster: no-preload-641000
	I0807 11:11:58.410600   11084 config.go:182] Loaded profile config "no-preload-641000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0807 11:11:58.415366   11084 out.go:177] * The control-plane node no-preload-641000 host is not running: state=Stopped
	I0807 11:11:58.418356   11084 out.go:177]   To start a cluster, run: "minikube start -p no-preload-641000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-641000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000: exit status 7 (28.636667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-641000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000: exit status 7 (28.571542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-641000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
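
pause fails differently from the neighboring subtests: minikube detects the Stopped host, prints the advisory instead of pausing, and exits 83, while the post-mortem status calls exit 7 for the same stopped state. A sketch of the guard the log implies, reusing the exact commands from this run (the wrapper itself is hypothetical, not part of the test suite):

	# Only attempt pause when the host reports Running; otherwise note the skip
	out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 \
		&& out/minikube-darwin-arm64 pause -p no-preload-641000 --alsologtostderr -v=1 \
		|| echo "host not running; pause skipped"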

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-240000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-240000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.883502917s)

-- stdout --
	* [default-k8s-diff-port-240000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-240000" primary control-plane node in "default-k8s-diff-port-240000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-240000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 11:11:58.826401   11108 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:11:58.826521   11108 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:11:58.826525   11108 out.go:304] Setting ErrFile to fd 2...
	I0807 11:11:58.826527   11108 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:11:58.826653   11108 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:11:58.827693   11108 out.go:298] Setting JSON to false
	I0807 11:11:58.843852   11108 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6087,"bootTime":1723048231,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 11:11:58.843929   11108 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 11:11:58.849359   11108 out.go:177] * [default-k8s-diff-port-240000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 11:11:58.857365   11108 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 11:11:58.857421   11108 notify.go:220] Checking for updates...
	I0807 11:11:58.864207   11108 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:11:58.867302   11108 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 11:11:58.870294   11108 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 11:11:58.873215   11108 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 11:11:58.876278   11108 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 11:11:58.879607   11108 config.go:182] Loaded profile config "embed-certs-332000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 11:11:58.879666   11108 config.go:182] Loaded profile config "multinode-190000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 11:11:58.879718   11108 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 11:11:58.884263   11108 out.go:177] * Using the qemu2 driver based on user configuration
	I0807 11:11:58.891302   11108 start.go:297] selected driver: qemu2
	I0807 11:11:58.891308   11108 start.go:901] validating driver "qemu2" against <nil>
	I0807 11:11:58.891314   11108 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 11:11:58.893668   11108 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 11:11:58.898240   11108 out.go:177] * Automatically selected the socket_vmnet network
	I0807 11:11:58.901346   11108 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 11:11:58.901380   11108 cni.go:84] Creating CNI manager for ""
	I0807 11:11:58.901387   11108 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 11:11:58.901391   11108 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 11:11:58.901426   11108 start.go:340] cluster config:
	{Name:default-k8s-diff-port-240000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-240000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:11:58.905292   11108 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:11:58.914245   11108 out.go:177] * Starting "default-k8s-diff-port-240000" primary control-plane node in "default-k8s-diff-port-240000" cluster
	I0807 11:11:58.918270   11108 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 11:11:58.918285   11108 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 11:11:58.918295   11108 cache.go:56] Caching tarball of preloaded images
	I0807 11:11:58.918353   11108 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 11:11:58.918360   11108 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 11:11:58.918420   11108 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/default-k8s-diff-port-240000/config.json ...
	I0807 11:11:58.918432   11108 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/default-k8s-diff-port-240000/config.json: {Name:mkb71030e2b64d4c3861d14d4f5b43e3a09239ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:11:58.918673   11108 start.go:360] acquireMachinesLock for default-k8s-diff-port-240000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:11:58.918711   11108 start.go:364] duration metric: took 30.209µs to acquireMachinesLock for "default-k8s-diff-port-240000"
	I0807 11:11:58.918723   11108 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-240000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-240000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:11:58.918757   11108 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:11:58.927335   11108 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 11:11:58.945691   11108 start.go:159] libmachine.API.Create for "default-k8s-diff-port-240000" (driver="qemu2")
	I0807 11:11:58.945731   11108 client.go:168] LocalClient.Create starting
	I0807 11:11:58.945797   11108 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:11:58.945827   11108 main.go:141] libmachine: Decoding PEM data...
	I0807 11:11:58.945836   11108 main.go:141] libmachine: Parsing certificate...
	I0807 11:11:58.945879   11108 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:11:58.945902   11108 main.go:141] libmachine: Decoding PEM data...
	I0807 11:11:58.945908   11108 main.go:141] libmachine: Parsing certificate...
	I0807 11:11:58.946252   11108 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:11:59.101899   11108 main.go:141] libmachine: Creating SSH key...
	I0807 11:11:59.214205   11108 main.go:141] libmachine: Creating Disk image...
	I0807 11:11:59.214211   11108 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:11:59.214422   11108 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/disk.qcow2
	I0807 11:11:59.223746   11108 main.go:141] libmachine: STDOUT: 
	I0807 11:11:59.223762   11108 main.go:141] libmachine: STDERR: 
	I0807 11:11:59.223802   11108 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/disk.qcow2 +20000M
	I0807 11:11:59.231529   11108 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:11:59.231545   11108 main.go:141] libmachine: STDERR: 
	I0807 11:11:59.231556   11108 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/disk.qcow2
	I0807 11:11:59.231561   11108 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:11:59.231574   11108 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:11:59.231595   11108 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:1d:dd:58:fc:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/disk.qcow2
	I0807 11:11:59.233188   11108 main.go:141] libmachine: STDOUT: 
	I0807 11:11:59.233210   11108 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:11:59.233230   11108 client.go:171] duration metric: took 287.498042ms to LocalClient.Create
	I0807 11:12:01.235384   11108 start.go:128] duration metric: took 2.316640625s to createHost
	I0807 11:12:01.235432   11108 start.go:83] releasing machines lock for "default-k8s-diff-port-240000", held for 2.316742542s
	W0807 11:12:01.235499   11108 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:12:01.263682   11108 out.go:177] * Deleting "default-k8s-diff-port-240000" in qemu2 ...
	W0807 11:12:01.284527   11108 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:12:01.284551   11108 start.go:729] Will try again in 5 seconds ...
	I0807 11:12:06.286629   11108 start.go:360] acquireMachinesLock for default-k8s-diff-port-240000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:12:06.287122   11108 start.go:364] duration metric: took 405.333µs to acquireMachinesLock for "default-k8s-diff-port-240000"
	I0807 11:12:06.287192   11108 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-240000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-240000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:12:06.287494   11108 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:12:06.296960   11108 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 11:12:06.347700   11108 start.go:159] libmachine.API.Create for "default-k8s-diff-port-240000" (driver="qemu2")
	I0807 11:12:06.347751   11108 client.go:168] LocalClient.Create starting
	I0807 11:12:06.347864   11108 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:12:06.347916   11108 main.go:141] libmachine: Decoding PEM data...
	I0807 11:12:06.347934   11108 main.go:141] libmachine: Parsing certificate...
	I0807 11:12:06.347994   11108 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:12:06.348026   11108 main.go:141] libmachine: Decoding PEM data...
	I0807 11:12:06.348038   11108 main.go:141] libmachine: Parsing certificate...
	I0807 11:12:06.348727   11108 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:12:06.515374   11108 main.go:141] libmachine: Creating SSH key...
	I0807 11:12:06.617045   11108 main.go:141] libmachine: Creating Disk image...
	I0807 11:12:06.617051   11108 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:12:06.617255   11108 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/disk.qcow2
	I0807 11:12:06.626562   11108 main.go:141] libmachine: STDOUT: 
	I0807 11:12:06.626579   11108 main.go:141] libmachine: STDERR: 
	I0807 11:12:06.626629   11108 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/disk.qcow2 +20000M
	I0807 11:12:06.634442   11108 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:12:06.634460   11108 main.go:141] libmachine: STDERR: 
	I0807 11:12:06.634474   11108 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/disk.qcow2
	I0807 11:12:06.634477   11108 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:12:06.634495   11108 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:12:06.634530   11108 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:c1:d7:9d:41:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/disk.qcow2
	I0807 11:12:06.636176   11108 main.go:141] libmachine: STDOUT: 
	I0807 11:12:06.636193   11108 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:12:06.636215   11108 client.go:171] duration metric: took 288.463417ms to LocalClient.Create
	I0807 11:12:08.638372   11108 start.go:128] duration metric: took 2.350879958s to createHost
	I0807 11:12:08.638436   11108 start.go:83] releasing machines lock for "default-k8s-diff-port-240000", held for 2.351322166s
	W0807 11:12:08.638746   11108 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-240000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-240000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:12:08.652240   11108 out.go:177] 
	W0807 11:12:08.659417   11108 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:12:08.659461   11108 out.go:239] * 
	* 
	W0807 11:12:08.662302   11108 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:12:08.669274   11108 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-240000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-240000 -n default-k8s-diff-port-240000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-240000 -n default-k8s-diff-port-240000: exit status 7 (62.133375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-240000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.95s)
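
Note: every failure in this run bottoms out at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon ("Failed to connect to "/var/run/socket_vmnet": Connection refused"), so no qemu2 VM gets a network backend. A minimal diagnostic sketch for the build host follows (the launchd label is an assumption; installs vary):

	# Is the unix socket present, and is a daemon serving it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If nothing is listening, restart the daemon (label is hypothetical):
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet

While the daemon is down, every qemu2 start on this host fails identically, which matches the rest of this report.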

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-332000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-332000 create -f testdata/busybox.yaml: exit status 1 (29.811708ms)

** stderr ** 
	error: context "embed-certs-332000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-332000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-332000 -n embed-certs-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-332000 -n embed-certs-332000: exit status 7 (28.833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-332000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-332000 -n embed-certs-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-332000 -n embed-certs-332000: exit status 7 (27.981708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
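
Note: the error context "embed-certs-332000" does not exist is a downstream symptom, not a separate bug: minikube writes the kubectl context into the kubeconfig only after a successful start, and no start for this profile completed. A quick check with stock kubectl (profile name taken from the log):

	kubectl config get-contexts -o name | grep embed-certs-332000 || echo "context missing"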

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-332000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-332000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-332000 describe deploy/metrics-server -n kube-system: exit status 1 (26.832416ms)

** stderr ** 
	error: context "embed-certs-332000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-332000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-332000 -n embed-certs-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-332000 -n embed-certs-332000: exit status 7 (28.920792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
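
Note: the addons enable step passed even with the host stopped because it only rewrites the profile's on-disk config; only the follow-up kubectl describe fails. To see what the enable step actually recorded (config path from this run's logs; the grep filter is illustrative):

	python3 -m json.tool /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/embed-certs-332000/config.json | grep -i -A1 MetricsServer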

TestStartStop/group/embed-certs/serial/SecondStart (6.28s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-332000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-332000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (6.210189125s)

-- stdout --
	* [embed-certs-332000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-332000" primary control-plane node in "embed-certs-332000" cluster
	* Restarting existing qemu2 VM for "embed-certs-332000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-332000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 11:12:07.546548   11165 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:12:07.546667   11165 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:12:07.546670   11165 out.go:304] Setting ErrFile to fd 2...
	I0807 11:12:07.546672   11165 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:12:07.546806   11165 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:12:07.547850   11165 out.go:298] Setting JSON to false
	I0807 11:12:07.563756   11165 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6096,"bootTime":1723048231,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 11:12:07.563820   11165 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 11:12:07.568286   11165 out.go:177] * [embed-certs-332000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 11:12:07.574300   11165 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 11:12:07.574356   11165 notify.go:220] Checking for updates...
	I0807 11:12:07.581191   11165 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:12:07.584153   11165 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 11:12:07.587211   11165 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 11:12:07.590199   11165 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 11:12:07.591713   11165 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 11:12:07.595436   11165 config.go:182] Loaded profile config "embed-certs-332000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 11:12:07.595696   11165 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 11:12:07.600157   11165 out.go:177] * Using the qemu2 driver based on existing profile
	I0807 11:12:07.606247   11165 start.go:297] selected driver: qemu2
	I0807 11:12:07.606254   11165 start.go:901] validating driver "qemu2" against &{Name:embed-certs-332000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:12:07.606328   11165 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 11:12:07.608537   11165 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 11:12:07.608562   11165 cni.go:84] Creating CNI manager for ""
	I0807 11:12:07.608569   11165 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 11:12:07.608589   11165 start.go:340] cluster config:
	{Name:embed-certs-332000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:12:07.611871   11165 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:12:07.619182   11165 out.go:177] * Starting "embed-certs-332000" primary control-plane node in "embed-certs-332000" cluster
	I0807 11:12:07.623179   11165 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 11:12:07.623196   11165 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 11:12:07.623207   11165 cache.go:56] Caching tarball of preloaded images
	I0807 11:12:07.623276   11165 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 11:12:07.623282   11165 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 11:12:07.623336   11165 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/embed-certs-332000/config.json ...
	I0807 11:12:07.623888   11165 start.go:360] acquireMachinesLock for embed-certs-332000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:12:08.638577   11165 start.go:364] duration metric: took 1.014680333s to acquireMachinesLock for "embed-certs-332000"
	I0807 11:12:08.638732   11165 start.go:96] Skipping create...Using existing machine configuration
	I0807 11:12:08.638788   11165 fix.go:54] fixHost starting: 
	I0807 11:12:08.639447   11165 fix.go:112] recreateIfNeeded on embed-certs-332000: state=Stopped err=<nil>
	W0807 11:12:08.639497   11165 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 11:12:08.655179   11165 out.go:177] * Restarting existing qemu2 VM for "embed-certs-332000" ...
	I0807 11:12:08.662428   11165 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:12:08.662604   11165 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:28:2e:e0:70:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/disk.qcow2
	I0807 11:12:08.672232   11165 main.go:141] libmachine: STDOUT: 
	I0807 11:12:08.672314   11165 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:12:08.672420   11165 fix.go:56] duration metric: took 33.645084ms for fixHost
	I0807 11:12:08.672438   11165 start.go:83] releasing machines lock for "embed-certs-332000", held for 33.825916ms
	W0807 11:12:08.672470   11165 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:12:08.672610   11165 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:12:08.672626   11165 start.go:729] Will try again in 5 seconds ...
	I0807 11:12:13.674757   11165 start.go:360] acquireMachinesLock for embed-certs-332000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:12:13.675121   11165 start.go:364] duration metric: took 288.459µs to acquireMachinesLock for "embed-certs-332000"
	I0807 11:12:13.675237   11165 start.go:96] Skipping create...Using existing machine configuration
	I0807 11:12:13.675257   11165 fix.go:54] fixHost starting: 
	I0807 11:12:13.676011   11165 fix.go:112] recreateIfNeeded on embed-certs-332000: state=Stopped err=<nil>
	W0807 11:12:13.676042   11165 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 11:12:13.680590   11165 out.go:177] * Restarting existing qemu2 VM for "embed-certs-332000" ...
	I0807 11:12:13.688449   11165 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:12:13.688635   11165 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:28:2e:e0:70:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/embed-certs-332000/disk.qcow2
	I0807 11:12:13.697708   11165 main.go:141] libmachine: STDOUT: 
	I0807 11:12:13.697765   11165 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:12:13.697847   11165 fix.go:56] duration metric: took 22.593958ms for fixHost
	I0807 11:12:13.697859   11165 start.go:83] releasing machines lock for "embed-certs-332000", held for 22.718375ms
	W0807 11:12:13.698039   11165 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-332000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-332000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:12:13.706406   11165 out.go:177] 
	W0807 11:12:13.710486   11165 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:12:13.710507   11165 out.go:239] * 
	* 
	W0807 11:12:13.713145   11165 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:12:13.720400   11165 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-332000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-332000 -n embed-certs-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-332000 -n embed-certs-332000: exit status 7 (68.159333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.28s)
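
Note: the second-start path retries once after 5 seconds (start.go:729 above) but both attempts go through the same socket_vmnet_client wrapper, so the retry cannot succeed while the daemon is unreachable. The failing call can be reproduced outside minikube; per the -netdev socket,id=net0,fd=3 flag in the log, the wrapper connects to the socket and hands it to the wrapped command as fd 3 (here using true as a stand-in command):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# Expected while the daemon is down:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused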

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-240000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-240000 create -f testdata/busybox.yaml: exit status 1 (29.645875ms)

** stderr ** 
	error: context "default-k8s-diff-port-240000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-240000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-240000 -n default-k8s-diff-port-240000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-240000 -n default-k8s-diff-port-240000: exit status 7 (28.119958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-240000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-240000 -n default-k8s-diff-port-240000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-240000 -n default-k8s-diff-port-240000: exit status 7 (28.937625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-240000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-240000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-240000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-240000 describe deploy/metrics-server -n kube-system: exit status 1 (26.832708ms)

** stderr ** 
	error: context "default-k8s-diff-port-240000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-240000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-240000 -n default-k8s-diff-port-240000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-240000 -n default-k8s-diff-port-240000: exit status 7 (28.918292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-240000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-240000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-240000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.1889675s)

-- stdout --
	* [default-k8s-diff-port-240000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-240000" primary control-plane node in "default-k8s-diff-port-240000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-240000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-240000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 11:12:12.924336   11208 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:12:12.924465   11208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:12:12.924468   11208 out.go:304] Setting ErrFile to fd 2...
	I0807 11:12:12.924471   11208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:12:12.924592   11208 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:12:12.925584   11208 out.go:298] Setting JSON to false
	I0807 11:12:12.941417   11208 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6101,"bootTime":1723048231,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 11:12:12.941483   11208 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 11:12:12.945288   11208 out.go:177] * [default-k8s-diff-port-240000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 11:12:12.953296   11208 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 11:12:12.953362   11208 notify.go:220] Checking for updates...
	I0807 11:12:12.959269   11208 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:12:12.962297   11208 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 11:12:12.965262   11208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 11:12:12.968274   11208 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 11:12:12.971234   11208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 11:12:12.974491   11208 config.go:182] Loaded profile config "default-k8s-diff-port-240000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 11:12:12.974796   11208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 11:12:12.979287   11208 out.go:177] * Using the qemu2 driver based on existing profile
	I0807 11:12:12.986244   11208 start.go:297] selected driver: qemu2
	I0807 11:12:12.986250   11208 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-240000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-240000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:12:12.986298   11208 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 11:12:12.988708   11208 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 11:12:12.988734   11208 cni.go:84] Creating CNI manager for ""
	I0807 11:12:12.988741   11208 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 11:12:12.988764   11208 start.go:340] cluster config:
	{Name:default-k8s-diff-port-240000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-240000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:12:12.992302   11208 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:12:13.001253   11208 out.go:177] * Starting "default-k8s-diff-port-240000" primary control-plane node in "default-k8s-diff-port-240000" cluster
	I0807 11:12:13.005333   11208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 11:12:13.005346   11208 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 11:12:13.005353   11208 cache.go:56] Caching tarball of preloaded images
	I0807 11:12:13.005414   11208 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 11:12:13.005419   11208 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 11:12:13.005470   11208 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/default-k8s-diff-port-240000/config.json ...
	I0807 11:12:13.006029   11208 start.go:360] acquireMachinesLock for default-k8s-diff-port-240000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:12:13.006058   11208 start.go:364] duration metric: took 22.959µs to acquireMachinesLock for "default-k8s-diff-port-240000"
	I0807 11:12:13.006067   11208 start.go:96] Skipping create...Using existing machine configuration
	I0807 11:12:13.006072   11208 fix.go:54] fixHost starting: 
	I0807 11:12:13.006193   11208 fix.go:112] recreateIfNeeded on default-k8s-diff-port-240000: state=Stopped err=<nil>
	W0807 11:12:13.006201   11208 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 11:12:13.009286   11208 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-240000" ...
	I0807 11:12:13.016266   11208 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:12:13.016305   11208 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:c1:d7:9d:41:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/disk.qcow2
	I0807 11:12:13.018196   11208 main.go:141] libmachine: STDOUT: 
	I0807 11:12:13.018216   11208 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:12:13.018245   11208 fix.go:56] duration metric: took 12.172041ms for fixHost
	I0807 11:12:13.018249   11208 start.go:83] releasing machines lock for "default-k8s-diff-port-240000", held for 12.186459ms
	W0807 11:12:13.018255   11208 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:12:13.018310   11208 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:12:13.018314   11208 start.go:729] Will try again in 5 seconds ...
	I0807 11:12:18.020428   11208 start.go:360] acquireMachinesLock for default-k8s-diff-port-240000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:12:18.020932   11208 start.go:364] duration metric: took 416.125µs to acquireMachinesLock for "default-k8s-diff-port-240000"
	I0807 11:12:18.021048   11208 start.go:96] Skipping create...Using existing machine configuration
	I0807 11:12:18.021069   11208 fix.go:54] fixHost starting: 
	I0807 11:12:18.021841   11208 fix.go:112] recreateIfNeeded on default-k8s-diff-port-240000: state=Stopped err=<nil>
	W0807 11:12:18.021867   11208 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 11:12:18.038444   11208 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-240000" ...
	I0807 11:12:18.041340   11208 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:12:18.041586   11208 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:c1:d7:9d:41:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/default-k8s-diff-port-240000/disk.qcow2
	I0807 11:12:18.050685   11208 main.go:141] libmachine: STDOUT: 
	I0807 11:12:18.050771   11208 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:12:18.050874   11208 fix.go:56] duration metric: took 29.805ms for fixHost
	I0807 11:12:18.050896   11208 start.go:83] releasing machines lock for "default-k8s-diff-port-240000", held for 29.938834ms
	W0807 11:12:18.051108   11208 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-240000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-240000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:12:18.059292   11208 out.go:177] 
	W0807 11:12:18.062409   11208 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:12:18.062440   11208 out.go:239] * 
	* 
	W0807 11:12:18.064862   11208 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:12:18.073339   11208 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-240000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-240000 -n default-k8s-diff-port-240000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-240000 -n default-k8s-diff-port-240000: exit status 7 (64.995917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-240000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-332000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-332000 -n embed-certs-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-332000 -n embed-certs-332000: exit status 7 (32.190292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
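
Note: this wait never issues a kubectl call; it fails while building the client config, again because the context was never written. On a live cluster the equivalent manual check would be something like the following (the label selector is an assumption):

	kubectl --context embed-certs-332000 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard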

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-332000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-332000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-332000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.024667ms)

** stderr ** 
	error: context "embed-certs-332000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-332000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-332000 -n embed-certs-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-332000 -n embed-certs-332000: exit status 7 (28.091791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-332000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-332000 -n embed-certs-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-332000 -n embed-certs-332000: exit status 7 (28.907708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
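
Note: per the (-want +got) legend above, the "-" lines are expected v1.30.3 control-plane images that were absent; the got side is empty because image list had no running VM to query. On a healthy cluster the same command from the log returns that full set:

	out/minikube-darwin-arm64 -p embed-certs-332000 image list --format=json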

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-332000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-332000 --alsologtostderr -v=1: exit status 83 (40.657292ms)

-- stdout --
	* The control-plane node embed-certs-332000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-332000"

-- /stdout --
** stderr ** 
	I0807 11:12:13.982759   11227 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:12:13.982919   11227 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:12:13.982923   11227 out.go:304] Setting ErrFile to fd 2...
	I0807 11:12:13.982925   11227 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:12:13.983041   11227 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:12:13.983253   11227 out.go:298] Setting JSON to false
	I0807 11:12:13.983259   11227 mustload.go:65] Loading cluster: embed-certs-332000
	I0807 11:12:13.983445   11227 config.go:182] Loaded profile config "embed-certs-332000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 11:12:13.988298   11227 out.go:177] * The control-plane node embed-certs-332000 host is not running: state=Stopped
	I0807 11:12:13.992351   11227 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-332000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-332000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-332000 -n embed-certs-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-332000 -n embed-certs-332000: exit status 7 (28.723667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-332000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-332000 -n embed-certs-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-332000 -n embed-certs-332000: exit status 7 (28.864041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
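
The post-mortem helper repeatedly runs "status --format={{.Host}}" and notes "exit status 7 (may be ok)": minikube's status command reports component state through its exit code (7 is what a fully stopped profile returns), so the harness inspects the code rather than the output text alone. A sketch of the same probe via os/exec, under the assumption that the binary and profile name from this run exist locally:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", "embed-certs-332000")
        out, err := cmd.CombinedOutput()
        fmt.Printf("stdout: %s", out) // "Stopped" in the runs above
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // 7 is the code the helpers above treat as "stopped, may be ok".
            fmt.Println("exit code:", exitErr.ExitCode())
        }
    }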

TestStartStop/group/newest-cni/serial/FirstStart (9.97s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-319000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-319000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (9.900459917s)

-- stdout --
	* [newest-cni-319000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-319000" primary control-plane node in "newest-cni-319000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-319000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 11:12:14.309894   11244 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:12:14.310020   11244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:12:14.310024   11244 out.go:304] Setting ErrFile to fd 2...
	I0807 11:12:14.310026   11244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:12:14.310157   11244 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:12:14.311242   11244 out.go:298] Setting JSON to false
	I0807 11:12:14.327158   11244 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6103,"bootTime":1723048231,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 11:12:14.327231   11244 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 11:12:14.331448   11244 out.go:177] * [newest-cni-319000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 11:12:14.341411   11244 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 11:12:14.341470   11244 notify.go:220] Checking for updates...
	I0807 11:12:14.348343   11244 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:12:14.351396   11244 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 11:12:14.354400   11244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 11:12:14.357363   11244 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 11:12:14.360358   11244 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 11:12:14.363696   11244 config.go:182] Loaded profile config "default-k8s-diff-port-240000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 11:12:14.363756   11244 config.go:182] Loaded profile config "multinode-190000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 11:12:14.363804   11244 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 11:12:14.368316   11244 out.go:177] * Using the qemu2 driver based on user configuration
	I0807 11:12:14.375420   11244 start.go:297] selected driver: qemu2
	I0807 11:12:14.375426   11244 start.go:901] validating driver "qemu2" against <nil>
	I0807 11:12:14.375434   11244 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 11:12:14.377851   11244 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0807 11:12:14.377872   11244 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0807 11:12:14.386371   11244 out.go:177] * Automatically selected the socket_vmnet network
	I0807 11:12:14.389474   11244 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0807 11:12:14.389510   11244 cni.go:84] Creating CNI manager for ""
	I0807 11:12:14.389518   11244 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 11:12:14.389522   11244 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 11:12:14.389563   11244 start.go:340] cluster config:
	{Name:newest-cni-319000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-319000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:12:14.393405   11244 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:12:14.401408   11244 out.go:177] * Starting "newest-cni-319000" primary control-plane node in "newest-cni-319000" cluster
	I0807 11:12:14.405242   11244 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0807 11:12:14.405260   11244 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0807 11:12:14.405271   11244 cache.go:56] Caching tarball of preloaded images
	I0807 11:12:14.405337   11244 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 11:12:14.405342   11244 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0807 11:12:14.405413   11244 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/newest-cni-319000/config.json ...
	I0807 11:12:14.405431   11244 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/newest-cni-319000/config.json: {Name:mk12856ea23f1ab6eba4117215ff86871faa745c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 11:12:14.405649   11244 start.go:360] acquireMachinesLock for newest-cni-319000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:12:14.405683   11244 start.go:364] duration metric: took 27.583µs to acquireMachinesLock for "newest-cni-319000"
	I0807 11:12:14.405694   11244 start.go:93] Provisioning new machine with config: &{Name:newest-cni-319000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-319000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:
/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:12:14.405720   11244 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:12:14.414220   11244 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 11:12:14.431543   11244 start.go:159] libmachine.API.Create for "newest-cni-319000" (driver="qemu2")
	I0807 11:12:14.431572   11244 client.go:168] LocalClient.Create starting
	I0807 11:12:14.431631   11244 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:12:14.431660   11244 main.go:141] libmachine: Decoding PEM data...
	I0807 11:12:14.431669   11244 main.go:141] libmachine: Parsing certificate...
	I0807 11:12:14.431704   11244 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:12:14.431727   11244 main.go:141] libmachine: Decoding PEM data...
	I0807 11:12:14.431733   11244 main.go:141] libmachine: Parsing certificate...
	I0807 11:12:14.432096   11244 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:12:14.588694   11244 main.go:141] libmachine: Creating SSH key...
	I0807 11:12:14.626884   11244 main.go:141] libmachine: Creating Disk image...
	I0807 11:12:14.626893   11244 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:12:14.627073   11244 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/disk.qcow2
	I0807 11:12:14.636372   11244 main.go:141] libmachine: STDOUT: 
	I0807 11:12:14.636389   11244 main.go:141] libmachine: STDERR: 
	I0807 11:12:14.636440   11244 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/disk.qcow2 +20000M
	I0807 11:12:14.644486   11244 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:12:14.644502   11244 main.go:141] libmachine: STDERR: 
	I0807 11:12:14.644519   11244 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/disk.qcow2
	I0807 11:12:14.644524   11244 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:12:14.644535   11244 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:12:14.644560   11244 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:e9:36:c8:0b:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/disk.qcow2
	I0807 11:12:14.646237   11244 main.go:141] libmachine: STDOUT: 
	I0807 11:12:14.646253   11244 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:12:14.646271   11244 client.go:171] duration metric: took 214.697083ms to LocalClient.Create
	I0807 11:12:16.648425   11244 start.go:128] duration metric: took 2.24271875s to createHost
	I0807 11:12:16.648476   11244 start.go:83] releasing machines lock for "newest-cni-319000", held for 2.242815916s
	W0807 11:12:16.648553   11244 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:12:16.667810   11244 out.go:177] * Deleting "newest-cni-319000" in qemu2 ...
	W0807 11:12:16.695928   11244 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:12:16.695950   11244 start.go:729] Will try again in 5 seconds ...
	I0807 11:12:21.698110   11244 start.go:360] acquireMachinesLock for newest-cni-319000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:12:21.698728   11244 start.go:364] duration metric: took 491.708µs to acquireMachinesLock for "newest-cni-319000"
	I0807 11:12:21.698867   11244 start.go:93] Provisioning new machine with config: &{Name:newest-cni-319000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-319000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:
/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 11:12:21.699160   11244 start.go:125] createHost starting for "" (driver="qemu2")
	I0807 11:12:21.704756   11244 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 11:12:21.754816   11244 start.go:159] libmachine.API.Create for "newest-cni-319000" (driver="qemu2")
	I0807 11:12:21.754893   11244 client.go:168] LocalClient.Create starting
	I0807 11:12:21.755017   11244 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/ca.pem
	I0807 11:12:21.755074   11244 main.go:141] libmachine: Decoding PEM data...
	I0807 11:12:21.755091   11244 main.go:141] libmachine: Parsing certificate...
	I0807 11:12:21.755160   11244 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19389-6671/.minikube/certs/cert.pem
	I0807 11:12:21.755204   11244 main.go:141] libmachine: Decoding PEM data...
	I0807 11:12:21.755215   11244 main.go:141] libmachine: Parsing certificate...
	I0807 11:12:21.755766   11244 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0807 11:12:21.920074   11244 main.go:141] libmachine: Creating SSH key...
	I0807 11:12:22.117513   11244 main.go:141] libmachine: Creating Disk image...
	I0807 11:12:22.117520   11244 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0807 11:12:22.117747   11244 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/disk.qcow2.raw /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/disk.qcow2
	I0807 11:12:22.127461   11244 main.go:141] libmachine: STDOUT: 
	I0807 11:12:22.127481   11244 main.go:141] libmachine: STDERR: 
	I0807 11:12:22.127547   11244 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/disk.qcow2 +20000M
	I0807 11:12:22.135411   11244 main.go:141] libmachine: STDOUT: Image resized.
	
	I0807 11:12:22.135425   11244 main.go:141] libmachine: STDERR: 
	I0807 11:12:22.135435   11244 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/disk.qcow2
	I0807 11:12:22.135439   11244 main.go:141] libmachine: Starting QEMU VM...
	I0807 11:12:22.135454   11244 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:12:22.135478   11244 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:90:2d:a2:36:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/disk.qcow2
	I0807 11:12:22.137065   11244 main.go:141] libmachine: STDOUT: 
	I0807 11:12:22.137082   11244 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:12:22.137093   11244 client.go:171] duration metric: took 382.198292ms to LocalClient.Create
	I0807 11:12:24.139238   11244 start.go:128] duration metric: took 2.440063583s to createHost
	I0807 11:12:24.139303   11244 start.go:83] releasing machines lock for "newest-cni-319000", held for 2.440586541s
	W0807 11:12:24.139636   11244 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-319000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:12:24.150223   11244 out.go:177] 
	W0807 11:12:24.157162   11244 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:12:24.157203   11244 out.go:239] * 
	W0807 11:12:24.159619   11244 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:12:24.173163   11244 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-319000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-319000 -n newest-cni-319000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-319000 -n newest-cni-319000: exit status 7 (65.350208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-319000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.97s)
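
Every qemu2 start in this report dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, which means nothing is listening there (the socket_vmnet daemon appears to be down on the agent). The failure can be reproduced independently of minikube by dialing the socket directly; a minimal sketch using the path from the log:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Dial the same unix socket socket_vmnet_client hands to qemu.
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            // Expected on this agent: "connect: connection refused".
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the dial is refused, restarting the socket_vmnet service on the host (per the socket_vmnet and minikube qemu driver docs) is the usual fix; the suggested "minikube delete -p newest-cni-319000" only clears the half-created profile and does not revive the daemon.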

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-240000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-240000 -n default-k8s-diff-port-240000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-240000 -n default-k8s-diff-port-240000: exit status 7 (31.155084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-240000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-240000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-240000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-240000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.144875ms)

** stderr ** 
	error: context "default-k8s-diff-port-240000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-240000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-240000 -n default-k8s-diff-port-240000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-240000 -n default-k8s-diff-port-240000: exit status 7 (28.243125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-240000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-240000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-240000 -n default-k8s-diff-port-240000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-240000 -n default-k8s-diff-port-240000: exit status 7 (28.777ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-240000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-240000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-240000 --alsologtostderr -v=1: exit status 83 (40.769167ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-240000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-240000"

-- /stdout --
** stderr ** 
	I0807 11:12:18.335921   11275 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:12:18.336086   11275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:12:18.336090   11275 out.go:304] Setting ErrFile to fd 2...
	I0807 11:12:18.336092   11275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:12:18.336220   11275 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:12:18.336452   11275 out.go:298] Setting JSON to false
	I0807 11:12:18.336459   11275 mustload.go:65] Loading cluster: default-k8s-diff-port-240000
	I0807 11:12:18.336650   11275 config.go:182] Loaded profile config "default-k8s-diff-port-240000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 11:12:18.340270   11275 out.go:177] * The control-plane node default-k8s-diff-port-240000 host is not running: state=Stopped
	I0807 11:12:18.344297   11275 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-240000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-240000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-240000 -n default-k8s-diff-port-240000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-240000 -n default-k8s-diff-port-240000: exit status 7 (28.553166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-240000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-240000 -n default-k8s-diff-port-240000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-240000 -n default-k8s-diff-port-240000: exit status 7 (27.594167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-240000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-319000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-319000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (5.179572125s)

-- stdout --
	* [newest-cni-319000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-319000" primary control-plane node in "newest-cni-319000" cluster
	* Restarting existing qemu2 VM for "newest-cni-319000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-319000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0807 11:12:28.223345   11323 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:12:28.223465   11323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:12:28.223468   11323 out.go:304] Setting ErrFile to fd 2...
	I0807 11:12:28.223471   11323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:12:28.223612   11323 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:12:28.224614   11323 out.go:298] Setting JSON to false
	I0807 11:12:28.240539   11323 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6117,"bootTime":1723048231,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 11:12:28.240601   11323 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 11:12:28.244907   11323 out.go:177] * [newest-cni-319000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 11:12:28.250763   11323 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 11:12:28.250796   11323 notify.go:220] Checking for updates...
	I0807 11:12:28.257687   11323 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 11:12:28.260683   11323 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 11:12:28.263736   11323 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 11:12:28.265140   11323 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 11:12:28.268726   11323 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 11:12:28.271989   11323 config.go:182] Loaded profile config "newest-cni-319000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0807 11:12:28.272285   11323 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 11:12:28.276550   11323 out.go:177] * Using the qemu2 driver based on existing profile
	I0807 11:12:28.283693   11323 start.go:297] selected driver: qemu2
	I0807 11:12:28.283699   11323 start.go:901] validating driver "qemu2" against &{Name:newest-cni-319000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-319000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPo
rts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:12:28.283741   11323 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 11:12:28.286015   11323 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0807 11:12:28.286060   11323 cni.go:84] Creating CNI manager for ""
	I0807 11:12:28.286068   11323 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 11:12:28.286104   11323 start.go:340] cluster config:
	{Name:newest-cni-319000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-319000 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false Ext
raDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 11:12:28.289611   11323 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 11:12:28.296671   11323 out.go:177] * Starting "newest-cni-319000" primary control-plane node in "newest-cni-319000" cluster
	I0807 11:12:28.300764   11323 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0807 11:12:28.300780   11323 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0807 11:12:28.300789   11323 cache.go:56] Caching tarball of preloaded images
	I0807 11:12:28.300852   11323 preload.go:172] Found /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 11:12:28.300859   11323 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0807 11:12:28.300929   11323 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/newest-cni-319000/config.json ...
	I0807 11:12:28.301462   11323 start.go:360] acquireMachinesLock for newest-cni-319000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:12:28.301498   11323 start.go:364] duration metric: took 29.25µs to acquireMachinesLock for "newest-cni-319000"
	I0807 11:12:28.301507   11323 start.go:96] Skipping create...Using existing machine configuration
	I0807 11:12:28.301513   11323 fix.go:54] fixHost starting: 
	I0807 11:12:28.301634   11323 fix.go:112] recreateIfNeeded on newest-cni-319000: state=Stopped err=<nil>
	W0807 11:12:28.301644   11323 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 11:12:28.305751   11323 out.go:177] * Restarting existing qemu2 VM for "newest-cni-319000" ...
	I0807 11:12:28.313656   11323 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:12:28.313698   11323 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:90:2d:a2:36:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/disk.qcow2
	I0807 11:12:28.315744   11323 main.go:141] libmachine: STDOUT: 
	I0807 11:12:28.315766   11323 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:12:28.315795   11323 fix.go:56] duration metric: took 14.28125ms for fixHost
	I0807 11:12:28.315805   11323 start.go:83] releasing machines lock for "newest-cni-319000", held for 14.303125ms
	W0807 11:12:28.315817   11323 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:12:28.315861   11323 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:12:28.315867   11323 start.go:729] Will try again in 5 seconds ...
	I0807 11:12:33.318015   11323 start.go:360] acquireMachinesLock for newest-cni-319000: {Name:mkcf59343c330b925470ebde4818f6eede3baa00 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 11:12:33.318531   11323 start.go:364] duration metric: took 362.959µs to acquireMachinesLock for "newest-cni-319000"
	I0807 11:12:33.318717   11323 start.go:96] Skipping create...Using existing machine configuration
	I0807 11:12:33.318741   11323 fix.go:54] fixHost starting: 
	I0807 11:12:33.319567   11323 fix.go:112] recreateIfNeeded on newest-cni-319000: state=Stopped err=<nil>
	W0807 11:12:33.319593   11323 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 11:12:33.328166   11323 out.go:177] * Restarting existing qemu2 VM for "newest-cni-319000" ...
	I0807 11:12:33.332017   11323 qemu.go:418] Using hvf for hardware acceleration
	I0807 11:12:33.332232   11323 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:90:2d:a2:36:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19389-6671/.minikube/machines/newest-cni-319000/disk.qcow2
	I0807 11:12:33.341509   11323 main.go:141] libmachine: STDOUT: 
	I0807 11:12:33.341576   11323 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0807 11:12:33.341653   11323 fix.go:56] duration metric: took 22.920042ms for fixHost
	I0807 11:12:33.341677   11323 start.go:83] releasing machines lock for "newest-cni-319000", held for 23.086ms
	W0807 11:12:33.341890   11323 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-319000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-319000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0807 11:12:33.349178   11323 out.go:177] 
	W0807 11:12:33.352157   11323 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0807 11:12:33.352218   11323 out.go:239] * 
	* 
	W0807 11:12:33.355008   11323 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 11:12:33.362133   11323 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-319000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-319000 -n newest-cni-319000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-319000 -n newest-cni-319000: exit status 7 (67.342042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-319000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
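
Note on the failure mode: every qemu2 start in this run dies the same way, with socket_vmnet_client unable to reach the socket_vmnet daemon at /var/run/socket_vmnet. A minimal Go sketch, not part of the suite (the socket path is taken verbatim from the logs above), that reproduces the failing connectivity check:

// probe_socket_vmnet.go: dial the unix socket the qemu2 driver depends on.
package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// socket_vmnet_client connects here to obtain the file descriptor that
	// is handed to qemu as -netdev socket,id=net0,fd=3 (see the command
	// lines above); "connection refused" means nothing is listening.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

A "connection refused" (rather than a permission error) suggests the daemon was simply not running on the build agent, which would account for every VM-backed test in this report failing with the identical message.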

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-319000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-rc.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-rc.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-319000 -n newest-cni-319000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-319000 -n newest-cni-319000: exit status 7 (30.437042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-319000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
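
The "(-want +got)" block above looks like go-cmp diff output: the expected image list is compared against what `minikube image list --format=json` returned, and because the VM never started, the got side is empty and every image lands on the -want side. A small sketch of that comparison style (values copied from the diff above; the use of github.com/google/go-cmp is an assumption based on the output format):

// imagediff_sketch.go: reproduce the "(-want +got)" diff style shown above.
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
		"registry.k8s.io/pause:3.10",
	}
	var got []string // a stopped VM reports no images at all
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.31.0-rc.0 images missing (-want +got):\n%s", diff)
	}
}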

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-319000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-319000 --alsologtostderr -v=1: exit status 83 (41.071334ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-319000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-319000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 11:12:33.546211   11339 out.go:291] Setting OutFile to fd 1 ...
	I0807 11:12:33.546366   11339 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:12:33.546369   11339 out.go:304] Setting ErrFile to fd 2...
	I0807 11:12:33.546372   11339 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 11:12:33.546481   11339 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 11:12:33.546695   11339 out.go:298] Setting JSON to false
	I0807 11:12:33.546705   11339 mustload.go:65] Loading cluster: newest-cni-319000
	I0807 11:12:33.546919   11339 config.go:182] Loaded profile config "newest-cni-319000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0807 11:12:33.549981   11339 out.go:177] * The control-plane node newest-cni-319000 host is not running: state=Stopped
	I0807 11:12:33.553953   11339 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-319000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-319000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-319000 -n newest-cni-319000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-319000 -n newest-cni-319000: exit status 7 (29.219958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-319000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-319000 -n newest-cni-319000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-319000 -n newest-cni-319000: exit status 7 (30.474041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-319000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    

Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 16.49
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-rc.0/json-events 12.72
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.11
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.28
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 11
48 TestErrorSpam/start 0.37
49 TestErrorSpam/status 0.09
50 TestErrorSpam/pause 0.12
51 TestErrorSpam/unpause 0.12
52 TestErrorSpam/stop 9.4
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 1.82
64 TestFunctional/serial/CacheCmd/cache/add_local 1.04
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.03
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.23
80 TestFunctional/parallel/DryRun 0.27
81 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/AddonsCmd 0.09
102 TestFunctional/parallel/License 0.32
103 TestFunctional/parallel/Version/short 0.04
110 TestFunctional/parallel/ImageCommands/Setup 1.76
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.07
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.09
135 TestFunctional/parallel/ProfileCmd/profile_list 0.08
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.08
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_echo-server_images 0.07
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 1.85
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.2
202 TestMainNoArgs 0.03
249 TestStoppedBinaryUpgrade/Setup 1
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
266 TestNoKubernetes/serial/ProfileList 31.28
267 TestNoKubernetes/serial/Stop 3.23
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
279 TestStoppedBinaryUpgrade/MinikubeLogs 0.76
284 TestStartStop/group/old-k8s-version/serial/Stop 1.87
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
295 TestStartStop/group/no-preload/serial/Stop 3.5
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.11
308 TestStartStop/group/embed-certs/serial/Stop 3.53
309 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
313 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.83
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
328 TestStartStop/group/newest-cni/serial/Stop 3.76
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-143000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-143000: exit status 85 (95.240416ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-143000 | jenkins | v1.33.1 | 07 Aug 24 10:45 PDT |          |
	|         | -p download-only-143000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 10:45:47
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 10:45:47.480344    7168 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:45:47.480478    7168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:45:47.480481    7168 out.go:304] Setting ErrFile to fd 2...
	I0807 10:45:47.480483    7168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:45:47.480619    7168 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	W0807 10:45:47.480727    7168 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19389-6671/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19389-6671/.minikube/config/config.json: no such file or directory
	I0807 10:45:47.482005    7168 out.go:298] Setting JSON to true
	I0807 10:45:47.498535    7168 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4516,"bootTime":1723048231,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 10:45:47.498598    7168 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 10:45:47.503965    7168 out.go:97] [download-only-143000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 10:45:47.504137    7168 notify.go:220] Checking for updates...
	W0807 10:45:47.504128    7168 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball: no such file or directory
	I0807 10:45:47.506902    7168 out.go:169] MINIKUBE_LOCATION=19389
	I0807 10:45:47.509918    7168 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 10:45:47.514937    7168 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 10:45:47.517987    7168 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 10:45:47.520971    7168 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	W0807 10:45:47.526858    7168 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0807 10:45:47.527036    7168 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 10:45:47.529809    7168 out.go:97] Using the qemu2 driver based on user configuration
	I0807 10:45:47.529829    7168 start.go:297] selected driver: qemu2
	I0807 10:45:47.529843    7168 start.go:901] validating driver "qemu2" against <nil>
	I0807 10:45:47.529898    7168 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 10:45:47.532864    7168 out.go:169] Automatically selected the socket_vmnet network
	I0807 10:45:47.538266    7168 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0807 10:45:47.538363    7168 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0807 10:45:47.538415    7168 cni.go:84] Creating CNI manager for ""
	I0807 10:45:47.538434    7168 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0807 10:45:47.538489    7168 start.go:340] cluster config:
	{Name:download-only-143000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:45:47.542397    7168 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:45:47.545884    7168 out.go:97] Downloading VM boot image ...
	I0807 10:45:47.545899    7168 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0807 10:46:00.322530    7168 out.go:97] Starting "download-only-143000" primary control-plane node in "download-only-143000" cluster
	I0807 10:46:00.322558    7168 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0807 10:46:00.380462    7168 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0807 10:46:00.380482    7168 cache.go:56] Caching tarball of preloaded images
	I0807 10:46:00.380625    7168 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0807 10:46:00.386732    7168 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0807 10:46:00.386738    7168 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0807 10:46:00.474208    7168 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0807 10:46:17.005233    7168 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0807 10:46:17.005398    7168 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0807 10:46:17.700743    7168 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0807 10:46:17.700940    7168 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/download-only-143000/config.json ...
	I0807 10:46:17.700959    7168 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19389-6671/.minikube/profiles/download-only-143000/config.json: {Name:mk62558161899f20da00983e37b95b9b179e1f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 10:46:17.701247    7168 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0807 10:46:17.701449    7168 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0807 10:46:18.064320    7168 out.go:169] 
	W0807 10:46:18.070273    7168 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19389-6671/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10501dd20 0x10501dd20 0x10501dd20 0x10501dd20 0x10501dd20 0x10501dd20 0x10501dd20] Decompressors:map[bz2:0x14000893bf0 gz:0x14000893bf8 tar:0x14000893b80 tar.bz2:0x14000893b90 tar.gz:0x14000893ba0 tar.xz:0x14000893bb0 tar.zst:0x14000893bc0 tbz2:0x14000893b90 tgz:0x14000893ba0 txz:0x14000893bb0 tzst:0x14000893bc0 xz:0x14000893c10 zip:0x14000893c50 zst:0x14000893c18] Getters:map[file:0x14000cea1f0 http:0x14000880460 https:0x14000880500] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0807 10:46:18.070295    7168 out_reason.go:110] 
	W0807 10:46:18.077235    7168 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 10:46:18.081041    7168 out.go:169] 
	
	
	* The control-plane node download-only-143000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-143000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
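
The `?checksum=` suffixes on the download URLs above are hashicorp/go-getter syntax (the `&{Ctx:... Decompressors:... Getters:...}` struct in the 404 error is go-getter's). The preload tarball verifies fine against its `md5:` value, while the kubectl fetch fails earlier: its checksum is a `file:` reference to a .sha256 next to the binary, and that file 404s, most likely because v1.20.0 predates published darwin/arm64 kubectl builds. A by-hand sketch of the md5 verification go-getter performs on the preload (file name and expected digest copied from the log above):

// preload_md5_sketch.go: verify the preload tarball the way go-getter would.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	// Digest taken from the ?checksum=md5:... query string in the log.
	const want = "1a3e8f9b29e6affec63d76d0d3000942"
	f, err := os.Open("preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		log.Fatalf("checksum mismatch: got %s, want %s", got, want)
	}
	fmt.Println("preload checksum OK")
}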

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-143000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (16.49s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-616000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-616000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (16.492450125s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (16.49s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-616000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-616000: exit status 85 (78.328584ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-143000 | jenkins | v1.33.1 | 07 Aug 24 10:45 PDT |                     |
	|         | -p download-only-143000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
	| delete  | -p download-only-143000        | download-only-143000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
	| start   | -o=json --download-only        | download-only-616000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT |                     |
	|         | -p download-only-616000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 10:46:18
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 10:46:18.497393    7211 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:46:18.497555    7211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:46:18.497559    7211 out.go:304] Setting ErrFile to fd 2...
	I0807 10:46:18.497561    7211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:46:18.497698    7211 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:46:18.498748    7211 out.go:298] Setting JSON to true
	I0807 10:46:18.514814    7211 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4547,"bootTime":1723048231,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 10:46:18.514879    7211 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 10:46:18.519819    7211 out.go:97] [download-only-616000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 10:46:18.519942    7211 notify.go:220] Checking for updates...
	I0807 10:46:18.522715    7211 out.go:169] MINIKUBE_LOCATION=19389
	I0807 10:46:18.525785    7211 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 10:46:18.530821    7211 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 10:46:18.533713    7211 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 10:46:18.536753    7211 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	W0807 10:46:18.541281    7211 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0807 10:46:18.541455    7211 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 10:46:18.544814    7211 out.go:97] Using the qemu2 driver based on user configuration
	I0807 10:46:18.544824    7211 start.go:297] selected driver: qemu2
	I0807 10:46:18.544826    7211 start.go:901] validating driver "qemu2" against <nil>
	I0807 10:46:18.544899    7211 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 10:46:18.547761    7211 out.go:169] Automatically selected the socket_vmnet network
	I0807 10:46:18.552833    7211 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0807 10:46:18.552952    7211 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0807 10:46:18.552969    7211 cni.go:84] Creating CNI manager for ""
	I0807 10:46:18.552976    7211 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 10:46:18.552982    7211 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 10:46:18.553023    7211 start.go:340] cluster config:
	{Name:download-only-616000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:46:18.556492    7211 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:46:18.559779    7211 out.go:97] Starting "download-only-616000" primary control-plane node in "download-only-616000" cluster
	I0807 10:46:18.559788    7211 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 10:46:18.613147    7211 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 10:46:18.613189    7211 cache.go:56] Caching tarball of preloaded images
	I0807 10:46:18.613380    7211 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 10:46:18.618517    7211 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0807 10:46:18.618525    7211 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0807 10:46:18.692318    7211 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0807 10:46:33.055073    7211 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0807 10:46:33.055258    7211 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-616000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-616000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-616000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/json-events (12.72s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-658000 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-658000 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=qemu2 : (12.724335167s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (12.72s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-658000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-658000: exit status 85 (76.522917ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-143000 | jenkins | v1.33.1 | 07 Aug 24 10:45 PDT |                     |
	|         | -p download-only-143000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
	| delete  | -p download-only-143000           | download-only-143000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
	| start   | -o=json --download-only           | download-only-616000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT |                     |
	|         | -p download-only-616000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
	| delete  | -p download-only-616000           | download-only-616000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT | 07 Aug 24 10:46 PDT |
	| start   | -o=json --download-only           | download-only-658000 | jenkins | v1.33.1 | 07 Aug 24 10:46 PDT |                     |
	|         | -p download-only-658000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 10:46:35
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 10:46:35.294856    7235 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:46:35.294978    7235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:46:35.294981    7235 out.go:304] Setting ErrFile to fd 2...
	I0807 10:46:35.294984    7235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:46:35.295106    7235 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:46:35.296156    7235 out.go:298] Setting JSON to true
	I0807 10:46:35.312266    7235 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4564,"bootTime":1723048231,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 10:46:35.312327    7235 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 10:46:35.316770    7235 out.go:97] [download-only-658000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 10:46:35.316860    7235 notify.go:220] Checking for updates...
	I0807 10:46:35.320729    7235 out.go:169] MINIKUBE_LOCATION=19389
	I0807 10:46:35.324638    7235 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 10:46:35.328710    7235 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 10:46:35.331737    7235 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 10:46:35.333217    7235 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	W0807 10:46:35.339737    7235 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0807 10:46:35.339874    7235 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 10:46:35.342771    7235 out.go:97] Using the qemu2 driver based on user configuration
	I0807 10:46:35.342780    7235 start.go:297] selected driver: qemu2
	I0807 10:46:35.342783    7235 start.go:901] validating driver "qemu2" against <nil>
	I0807 10:46:35.342827    7235 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 10:46:35.345746    7235 out.go:169] Automatically selected the socket_vmnet network
	I0807 10:46:35.350915    7235 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0807 10:46:35.351009    7235 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0807 10:46:35.351039    7235 cni.go:84] Creating CNI manager for ""
	I0807 10:46:35.351047    7235 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 10:46:35.351052    7235 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 10:46:35.351090    7235 start.go:340] cluster config:
	{Name:download-only-658000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-658000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:46:35.354479    7235 iso.go:125] acquiring lock: {Name:mk2dbeb6fba3d8b47a5bc647977ea81b3f992050 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 10:46:35.357769    7235 out.go:97] Starting "download-only-658000" primary control-plane node in "download-only-658000" cluster
	I0807 10:46:35.357775    7235 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0807 10:46:35.416292    7235 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0807 10:46:35.416304    7235 cache.go:56] Caching tarball of preloaded images
	I0807 10:46:35.416502    7235 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0807 10:46:35.420782    7235 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0807 10:46:35.420789    7235 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0807 10:46:35.501884    7235 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4?checksum=md5:c1f196b49f29ebea060b9249b6cb8e03 -> /Users/jenkins/minikube-integration/19389-6671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-658000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-658000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-658000
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestBinaryMirror (0.28s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-238000 --alsologtostderr --binary-mirror http://127.0.0.1:51035 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-238000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-238000
--- PASS: TestBinaryMirror (0.28s)
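
TestBinaryMirror starts a profile with --binary-mirror http://127.0.0.1:51035, redirecting minikube's kubectl/kubelet/kubeadm downloads away from dl.k8s.io to a locally served mirror. A minimal sketch of a server that could sit behind that flag (the port is taken from the command above; the on-disk layout, mirroring the dl.k8s.io release paths seen earlier in this report, is an assumption):

// mirror_sketch.go: a static file server of the kind --binary-mirror targets.
package main

import (
	"log"
	"net/http"
)

func main() {
	// Assumed layout: ./mirror/v1.30.3/bin/darwin/arm64/kubectl, so that
	// <mirror>/v1.30.3/bin/darwin/arm64/kubectl resolves the same way
	// https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl does.
	log.Fatal(http.ListenAndServe("127.0.0.1:51035", http.FileServer(http.Dir("./mirror"))))
}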

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-541000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-541000: exit status 85 (59.436375ms)

                                                
                                                
-- stdout --
	* Profile "addons-541000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-541000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-541000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-541000: exit status 85 (55.405958ms)

-- stdout --
	* Profile "addons-541000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-541000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (11s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (11.00s)

TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 status: exit status 7 (29.433375ms)

-- stdout --
	nospam-074000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 status: exit status 7 (29.575ms)

-- stdout --
	nospam-074000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 status: exit status 7 (28.958542ms)

-- stdout --
	nospam-074000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)
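
Every (dbg) step above shells out to the minikube binary and asserts on the exit code; with the host stopped, status reports exit status 7 each time. A minimal sketch of that run-and-inspect pattern using only os/exec — the helper is ours, not the harness's actual code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs cmd, echoes its combined output, and returns the
// process exit code. A non-zero exit is data here, not a failure.
func exitCode(cmd *exec.Cmd) (int, error) {
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), nil // command ran but exited non-zero
	}
	return 0, err // nil on success, or a start/IO error
}

func main() {
	code, err := exitCode(exec.Command("out/minikube-darwin-arm64", "-p", "nospam-074000", "status"))
	if err != nil {
		panic(err)
	}
	// A stopped host reports exit status 7, as seen in the log above.
	fmt.Println("exit status:", code)
}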

TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 pause: exit status 83 (39.10925ms)

-- stdout --
	* The control-plane node nospam-074000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-074000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 pause: exit status 83 (39.889125ms)

-- stdout --
	* The control-plane node nospam-074000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-074000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 pause: exit status 83 (36.714792ms)

-- stdout --
	* The control-plane node nospam-074000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-074000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 unpause: exit status 83 (38.7235ms)

-- stdout --
	* The control-plane node nospam-074000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-074000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 unpause: exit status 83 (40.097125ms)

-- stdout --
	* The control-plane node nospam-074000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-074000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 unpause: exit status 83 (37.9875ms)

-- stdout --
	* The control-plane node nospam-074000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-074000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (9.4s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 stop: (3.486114791s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 stop: (1.862418958s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-074000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-074000 stop: (4.0493195s)
--- PASS: TestErrorSpam/stop (9.40s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19389-6671/.minikube/files/etc/test/nested/copy/7166/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.82s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.82s)

TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3555573150/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cache add minikube-local-cache-test:functional-908000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cache delete minikube-local-cache-test:functional-908000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-908000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 config get cpus: exit status 14 (29.183542ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 config get cpus: exit status 14 (32.289083ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-908000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-908000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (160.325958ms)

-- stdout --
	* [functional-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0807 10:48:29.909803    7867 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:48:29.910031    7867 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:48:29.910038    7867 out.go:304] Setting ErrFile to fd 2...
	I0807 10:48:29.910041    7867 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:48:29.910231    7867 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:48:29.911879    7867 out.go:298] Setting JSON to false
	I0807 10:48:29.931667    7867 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4678,"bootTime":1723048231,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 10:48:29.931752    7867 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 10:48:29.936931    7867 out.go:177] * [functional-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0807 10:48:29.943914    7867 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 10:48:29.943935    7867 notify.go:220] Checking for updates...
	I0807 10:48:29.950843    7867 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 10:48:29.953814    7867 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 10:48:29.956789    7867 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 10:48:29.959798    7867 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 10:48:29.962832    7867 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 10:48:29.966151    7867 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:48:29.966452    7867 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 10:48:29.970713    7867 out.go:177] * Using the qemu2 driver based on existing profile
	I0807 10:48:29.977760    7867 start.go:297] selected driver: qemu2
	I0807 10:48:29.977767    7867 start.go:901] validating driver "qemu2" against &{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:48:29.977836    7867 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 10:48:29.984785    7867 out.go:177] 
	W0807 10:48:29.987722    7867 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0807 10:48:29.991772    7867 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-908000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)
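
The dry run is rejected with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MiB falls below the 1800MB usable minimum quoted in the message. A toy version of that guard, with the constant and names taken from the log text rather than from minikube's source:

package main

import "fmt"

// minUsableMemMB is the minimum quoted by the error message above.
const minUsableMemMB = 1800

// validateMemory mirrors the shape of the check, not minikube's code.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // fails, like --memory 250MB above
	fmt.Println(validateMemory(4000)) // passes; 4000 is this profile's configured memory
}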

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-908000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-908000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (108.80225ms)

-- stdout --
	* [functional-908000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0807 10:48:30.136005    7878 out.go:291] Setting OutFile to fd 1 ...
	I0807 10:48:30.136107    7878 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:48:30.136111    7878 out.go:304] Setting ErrFile to fd 2...
	I0807 10:48:30.136113    7878 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 10:48:30.136233    7878 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19389-6671/.minikube/bin
	I0807 10:48:30.137634    7878 out.go:298] Setting JSON to false
	I0807 10:48:30.154202    7878 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4679,"bootTime":1723048231,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0807 10:48:30.154294    7878 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 10:48:30.157801    7878 out.go:177] * [functional-908000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0807 10:48:30.164829    7878 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 10:48:30.164875    7878 notify.go:220] Checking for updates...
	I0807 10:48:30.171835    7878 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	I0807 10:48:30.174823    7878 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0807 10:48:30.177866    7878 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 10:48:30.180805    7878 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	I0807 10:48:30.183853    7878 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 10:48:30.187154    7878 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 10:48:30.187451    7878 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 10:48:30.191804    7878 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0807 10:48:30.198725    7878 start.go:297] selected driver: qemu2
	I0807 10:48:30.198732    7878 start.go:901] validating driver "qemu2" against &{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 10:48:30.198795    7878 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 10:48:30.205784    7878 out.go:177] 
	W0807 10:48:30.208777    7878 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0807 10:48:30.212755    7878 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.76s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.73445675s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-908000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.76s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image rm docker.io/kicbase/echo-server:functional-908000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-908000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image save --daemon docker.io/kicbase/echo-server:functional-908000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-908000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "46.791584ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "33.921833ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "46.037458ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "33.639708ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.012974208s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
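
The check above asks the macOS resolver (via dscacheutil) for the in-cluster service name once the tunnel is up. From Go, an approximate equivalent is a plain lookup with net.LookupHost; whether that actually consults the system resolver depends on Go's resolver selection on darwin, so treat this as an approximation of the dscacheutil probe, not a replacement for it:

package main

import (
	"fmt"
	"net"
)

func main() {
	// The trailing dot marks the name as fully qualified, matching the
	// dscacheutil query in the log above.
	addrs, err := net.LookupHost("nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("not resolvable:", err)
		return
	}
	fmt.Println("resolved to:", addrs) // expects the tunnel's LoadBalancer IP
}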

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-908000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-908000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-908000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-845000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-845000 --output=json --user=testUser: (1.846172709s)
--- PASS: TestJSONOutput/stop/Command (1.85s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-953000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-953000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.799209ms)

-- stdout --
	{"specversion":"1.0","id":"1a1bd2e9-bff5-47a9-bab0-f3980aaf0e12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-953000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"206a7a66-d2f4-4a13-b05e-8a294ba27eec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19389"}}
	{"specversion":"1.0","id":"4635b6e1-7980-4339-ad4b-34d5fd07c0cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig"}}
	{"specversion":"1.0","id":"f24f3157-fab8-44a9-8ffa-444cda2b50a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"9165a5cf-02ce-4458-a5c9-b72d808a2c66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"63505bed-0bea-4c55-b576-d45ecba93bde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube"}}
	{"specversion":"1.0","id":"e84dbd26-ce54-40c5-9c55-f08ca92cdf12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"24c48f29-44a5-4618-863c-6dc163ab0cb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-953000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-953000
--- PASS: TestErrorJSONOutput (0.20s)
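
Each stdout line above is a CloudEvents-style JSON object, and the run's final event is an io.k8s.sigs.minikube.error carrying exitcode 56. Decoding such a line needs only encoding/json; the struct below models just the fields visible in this log, and its names are ours, not minikube's:

package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent covers the fields seen in the events above; the data
// payload is string-to-string in every event this test emits.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The final error event from the log, abbreviated to its non-empty fields.
	line := `{"specversion":"1.0","id":"24c48f29-44a5-4618-863c-6dc163ab0cb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS"}}`

	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, ev.Data["name"], "exitcode", ev.Data["exitcode"])
}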

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.00s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-673000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-673000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (99.751208ms)

-- stdout --
	* [NoKubernetes-673000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19389-6671/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19389-6671/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
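
The MK_USAGE failure is a flag-compatibility check: --no-kubernetes contradicts --kubernetes-version, so start exits with status 14 before touching the driver. A toy mutual-exclusion guard in the same spirit (names ours, not minikube's flag handling):

package main

import (
	"errors"
	"fmt"
)

// validateFlags rejects the contradictory combination seen in the log.
func validateFlags(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	// Mirrors: start --no-kubernetes --kubernetes-version=1.20
	fmt.Println(validateFlags(true, "1.20")) // rejected, as in the log
	fmt.Println(validateFlags(true, ""))     // fine: no Kubernetes at all
}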

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-673000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-673000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.634541ms)

-- stdout --
	* The control-plane node NoKubernetes-673000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-673000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.28s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.681400166s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.602226791s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.28s)
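The --output=json form is the scriptable variant of the same listing. A small sketch of extracting profile names from it (this assumes minikube's usual top-level "valid"/"invalid" arrays with a "Name" field per profile, and that jq is installed):

	$ out/minikube-darwin-arm64 profile list --output=json | jq -r '.valid[].Name'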

TestNoKubernetes/serial/Stop (3.23s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-673000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-673000: (3.226779875s)
--- PASS: TestNoKubernetes/serial/Stop (3.23s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-673000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-673000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.459333ms)
-- stdout --
	* The control-plane node NoKubernetes-673000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-673000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-423000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)

TestStartStop/group/old-k8s-version/serial/Stop (1.87s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-107000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-107000 --alsologtostderr -v=3: (1.8669405s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.87s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-107000 -n old-k8s-version-107000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-107000 -n old-k8s-version-107000: exit status 7 (55.909792ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-107000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
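The status probe above uses minikube's --format flag, which takes a Go template over the status fields, so {{.Host}} prints only the host state ("Stopped" here, with the exit status 7 the harness notes "may be ok"). A sketch combining several fields (template keys inferred from minikube's plain status output; treat the exact set as an assumption):

	$ out/minikube-darwin-arm64 status -p old-k8s-version-107000 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'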

TestStartStop/group/no-preload/serial/Stop (3.5s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-641000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-641000 --alsologtostderr -v=3: (3.499701625s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.50s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000: exit status 7 (49.112708ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-641000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/embed-certs/serial/Stop (3.53s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-332000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-332000 --alsologtostderr -v=3: (3.527818792s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.53s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-332000 -n embed-certs-332000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-332000 -n embed-certs-332000: exit status 7 (53.932166ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-332000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.83s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-240000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-240000 --alsologtostderr -v=3: (3.828867708s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.83s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-240000 -n default-k8s-diff-port-240000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-240000 -n default-k8s-diff-port-240000: exit status 7 (56.425125ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-240000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-319000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.76s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-319000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-319000 --alsologtostderr -v=3: (3.757702917s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.76s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-319000 -n newest-cni-319000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-319000 -n newest-cni-319000: exit status 7 (60.102292ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-319000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (24/266)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-rc.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (13.74s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port234302193/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723052869951228000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port234302193/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723052869951228000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port234302193/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723052869951228000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port234302193/001/test-1723052869951228000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (58.447584ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.120167ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.135167ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.058833ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.080084ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.081208ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.947333ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.29875ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo umount -f /mount-9p": exit status 83 (43.798959ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port234302193/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (13.74s)
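For context, "findmnt -T <path>" resolves the filesystem that contains the target path, so the piped grep succeeds only once a 9p entry is visible in the guest; that is the condition the test polls for before giving up. A hand-run polling sketch of the same check (illustrative only):

	$ until out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"; do sleep 1; done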

TestFunctional/parallel/MountCmd/specific-port (12.81s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2498842715/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (58.689ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.306334ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.716375ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.501667ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (80.432584ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.157375ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.444375ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo umount -f /mount-9p": exit status 83 (47.643ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2498842715/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (12.81s)

TestFunctional/parallel/MountCmd/VerifyCleanup (13.34s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1249401038/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1249401038/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1249401038/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1: exit status 83 (82.884333ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1: exit status 83 (84.10425ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1: exit status 83 (85.261166ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1: exit status 83 (86.106875ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1: exit status 83 (85.40425ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1: exit status 83 (85.54175ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1: exit status 83 (86.704292ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1: exit status 83 (84.261417ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1249401038/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1249401038/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1249401038/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (13.34s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
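The --gvisor gate is a flag on the test binary itself, so the suite stays skipped unless it is switched on explicitly. A sketch of opting in from the repo root (package path and flag wiring are assumptions inferred from the skip message, not a verified invocation):

	$ go test ./test/integration -run TestGvisorAddon -args --gvisor=true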

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.27s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-921000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-921000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-921000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-921000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-921000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-921000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-921000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-921000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-921000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-921000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-921000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-921000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-921000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-921000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-921000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-921000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-921000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-921000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-921000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-921000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-921000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-921000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-921000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-921000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-921000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-921000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-921000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-921000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-921000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-921000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-921000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

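Note that the kubeconfig dumped just above is completely empty: clusters, contexts, and users are all null. That one fact explains every `error: context "cilium-921000" does not exist` and `context was not found for specified context` line in this debug dump, since each collector invokes kubectl against a context that was never written. A minimal Go sketch of that kind of context pre-check, assuming only that kubectl is on PATH (illustrative only; this helper is not part of the actual test harness):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists reports whether the named context is present in the active
// kubeconfig, by shelling out to `kubectl config get-contexts -o name`,
// which prints one context name per line (and nothing for an empty config).
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("cilium-921000")
	fmt.Println(ok, err) // against the empty kubeconfig above: false <nil>
}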
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-921000

>>> host: docker daemon status:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

>>> host: docker daemon config:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

>>> host: docker system info:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

>>> host: cri-docker daemon status:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

>>> host: cri-docker daemon config:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

>>> host: cri-dockerd version:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

>>> host: containerd daemon status:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

>>> host: containerd daemon config:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

>>> host: containerd config dump:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

>>> host: crio daemon status:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

>>> host: crio daemon config:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

>>> host: /etc/crio:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

>>> host: crio config:
* Profile "cilium-921000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921000"

----------------------- debugLogs end: cilium-921000 [took: 2.163662542s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-921000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-921000
--- SKIP: TestNetworkPlugins/group/cilium (2.27s)
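All of the failed probes above are the expected aftermath of the skip: TestNetworkPlugins/group/cilium bails out before `minikube start` ever runs, so the cilium-921000 profile, context, and guest never exist, and each collector gets the stock "Profile ... not found" or "context ... does not exist" reply. A hedged Go sketch of checking for a profile before collecting, assuming `minikube profile list --output json` emits a top-level "valid" array of objects with a "Name" field (field names as commonly observed; verify against your minikube version):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profiles models only the slice of the JSON we need from
// `minikube profile list --output json` (assumed shape; verify locally).
type profiles struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
}

// profileExists reports whether the given minikube binary knows a profile
// by this name.
func profileExists(binary, name string) (bool, error) {
	out, err := exec.Command(binary, "profile", "list", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var p profiles
	if err := json.Unmarshal(out, &p); err != nil {
		return false, err
	}
	for _, v := range p.Valid {
		if v.Name == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := profileExists("out/minikube-darwin-arm64", "cilium-921000")
	fmt.Println(ok, err) // false here: the profile was never created
}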

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-357000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-357000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
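The guard that produces this skip is a plain driver check: the test exercises disable-driver-mounts only on the virtualbox driver, and this run uses qemu2. A minimal sketch of such a guard, assuming a `driver` string is available to the test (hypothetical helper; the real logic lives at start_stop_delete_test.go:103 and may differ):

package mytest

import "testing"

// skipUnlessVirtualbox mirrors the driver gate seen above: any driver
// other than virtualbox skips the test rather than failing it.
// Hypothetical helper; the real check is in start_stop_delete_test.go.
func skipUnlessVirtualbox(t *testing.T, driver string) {
	t.Helper()
	if driver != "virtualbox" {
		t.Skipf("skipping disable-driver-mounts - only runs on virtualbox, got %q", driver)
	}
}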
